Nov 25 10:31:41 crc systemd[1]: Starting Kubernetes Kubelet... Nov 25 10:31:41 crc restorecon[4754]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Nov 25 10:31:41 
crc restorecon[4754]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Nov 25 10:31:41 crc restorecon[4754]: 
/var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c574,c582 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Nov 25 
10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c440,c975 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 10:31:41 crc restorecon[4754]: 
/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c22 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 10:31:41 crc 
restorecon[4754]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Nov 25 10:31:41 crc restorecon[4754]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 
Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c968,c969 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Nov 25 10:31:41 crc restorecon[4754]: 
/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 
25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 10:31:41 
crc restorecon[4754]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 10:31:41 crc restorecon[4754]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 10:31:41 crc restorecon[4754]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 10:31:41 
crc restorecon[4754]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 25 10:31:41 crc restorecon[4754]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 10:31:41 crc restorecon[4754]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 10:31:41 crc restorecon[4754]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 25 10:31:41 crc restorecon[4754]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 25 10:31:41 crc restorecon[4754]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:41 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 
10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 10:31:42 crc 
restorecon[4754]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 
10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 
10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc 
restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 10:31:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 10:31:42 crc restorecon[4754]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 10:31:42 crc restorecon[4754]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Nov 25 10:31:43 crc kubenswrapper[4813]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 25 10:31:43 crc kubenswrapper[4813]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Nov 25 10:31:43 crc kubenswrapper[4813]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 25 10:31:43 crc kubenswrapper[4813]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 25 10:31:43 crc kubenswrapper[4813]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 25 10:31:43 crc kubenswrapper[4813]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.378334 4813 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.385524 4813 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.385569 4813 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.385579 4813 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.385585 4813 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.385591 4813 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.385597 4813 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.385603 4813 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.385610 4813 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.385615 4813 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.385621 4813 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.385627 4813 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.385632 4813 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.385638 4813 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.385644 4813 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.385650 4813 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.385656 4813 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.385663 4813 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.385671 4813 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.385700 4813 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.385709 4813 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. 
It will be removed in a future release. Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.385729 4813 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.385736 4813 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.385743 4813 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.385750 4813 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.385757 4813 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.385763 4813 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.385770 4813 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.385776 4813 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.385782 4813 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.385789 4813 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.385795 4813 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.385801 4813 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.385807 4813 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.385813 4813 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.385819 4813 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.385825 4813 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.385831 4813 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.385837 4813 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.385843 4813 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.385849 4813 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.385854 4813 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.385860 4813 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.385870 4813 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.385879 4813 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.385886 4813 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.385894 4813 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.385900 4813 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.385906 4813 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.385913 4813 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.385921 4813 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.385926 4813 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.385931 4813 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.385936 4813 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.385942 4813 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.385946 4813 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.385953 4813 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
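The long runs of feature_gate.go:330 warnings are produced because the feature-gate names being applied (mostly OpenShift cluster-level gates such as GatewayAPI or InsightsConfig) are unknown to the kubelet's embedded Kubernetes gate registry; gates the registry does know are applied, with an extra notice when a GA or deprecated gate is still set explicitly (feature_gate.go:353 and :351), and the effective set is logged afterwards (feature_gate.go:386). The sketch below is a simplified illustration of that resolution step, not the real k8s.io/component-base/featuregate code; the gate names and maturities in the table are taken from the log, everything else is an assumption.

```go
// Simplified sketch of the feature-gate resolution visible in the log:
// unknown gate names produce a warning, known GA/deprecated gates produce a
// notice, and the effective map is printed at the end. Illustration only,
// not the actual k8s.io/component-base/featuregate implementation.
package main

import "fmt"

type maturity string

const (
	alpha      maturity = "ALPHA"
	ga         maturity = "GA"
	deprecated maturity = "DEPRECATED"
)

// known is a tiny stand-in for the kubelet's registered gates.
var known = map[string]maturity{
	"CloudDualStackNodeIPs":                  ga,
	"DisableKubeletCloudCredentialProviders": ga,
	"ValidatingAdmissionPolicy":              ga,
	"KMSv1":                                  deprecated,
	"DynamicResourceAllocation":              alpha,
}

func apply(requested map[string]bool) map[string]bool {
	effective := map[string]bool{}
	for name, enabled := range requested {
		m, ok := known[name]
		if !ok {
			fmt.Printf("W unrecognized feature gate: %s\n", name)
			continue
		}
		switch m {
		case ga:
			fmt.Printf("W Setting GA feature gate %s=%v. It will be removed in a future release.\n", name, enabled)
		case deprecated:
			fmt.Printf("W Setting deprecated feature gate %s=%v. It will be removed in a future release.\n", name, enabled)
		}
		effective[name] = enabled
	}
	return effective
}

func main() {
	requested := map[string]bool{
		"CloudDualStackNodeIPs": true,
		"KMSv1":                 true,
		"GatewayAPI":            true, // OpenShift-specific, unknown to the kubelet registry
	}
	fmt.Printf("I feature gates: %v\n", apply(requested))
}
```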
Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.385958 4813 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.385964 4813 feature_gate.go:330] unrecognized feature gate: Example Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.385968 4813 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.385973 4813 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.385978 4813 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.385983 4813 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.385989 4813 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.385994 4813 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.386000 4813 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.386006 4813 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.386012 4813 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.386018 4813 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.386024 4813 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.386030 4813 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.386035 4813 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386152 4813 flags.go:64] FLAG: --address="0.0.0.0" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386164 4813 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386174 4813 flags.go:64] FLAG: --anonymous-auth="true" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386182 4813 flags.go:64] FLAG: --application-metrics-count-limit="100" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386191 4813 flags.go:64] FLAG: --authentication-token-webhook="false" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386197 4813 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386205 4813 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386212 4813 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386218 4813 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386224 4813 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386230 4813 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386236 4813 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Nov 
25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386242 4813 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386249 4813 flags.go:64] FLAG: --cgroup-root="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386255 4813 flags.go:64] FLAG: --cgroups-per-qos="true" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386260 4813 flags.go:64] FLAG: --client-ca-file="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386266 4813 flags.go:64] FLAG: --cloud-config="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386272 4813 flags.go:64] FLAG: --cloud-provider="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386278 4813 flags.go:64] FLAG: --cluster-dns="[]" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386284 4813 flags.go:64] FLAG: --cluster-domain="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386290 4813 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386296 4813 flags.go:64] FLAG: --config-dir="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386301 4813 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386307 4813 flags.go:64] FLAG: --container-log-max-files="5" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386315 4813 flags.go:64] FLAG: --container-log-max-size="10Mi" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386321 4813 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386327 4813 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386333 4813 flags.go:64] FLAG: --containerd-namespace="k8s.io" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386338 4813 flags.go:64] FLAG: --contention-profiling="false" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386344 4813 flags.go:64] FLAG: --cpu-cfs-quota="true" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386350 4813 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386356 4813 flags.go:64] FLAG: --cpu-manager-policy="none" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386361 4813 flags.go:64] FLAG: --cpu-manager-policy-options="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386368 4813 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386374 4813 flags.go:64] FLAG: --enable-controller-attach-detach="true" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386379 4813 flags.go:64] FLAG: --enable-debugging-handlers="true" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386386 4813 flags.go:64] FLAG: --enable-load-reader="false" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386391 4813 flags.go:64] FLAG: --enable-server="true" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386397 4813 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386405 4813 flags.go:64] FLAG: --event-burst="100" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386411 4813 flags.go:64] FLAG: --event-qps="50" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386416 4813 flags.go:64] FLAG: --event-storage-age-limit="default=0" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 
10:31:43.386422 4813 flags.go:64] FLAG: --event-storage-event-limit="default=0" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386427 4813 flags.go:64] FLAG: --eviction-hard="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386434 4813 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386441 4813 flags.go:64] FLAG: --eviction-minimum-reclaim="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386449 4813 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386456 4813 flags.go:64] FLAG: --eviction-soft="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386463 4813 flags.go:64] FLAG: --eviction-soft-grace-period="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386469 4813 flags.go:64] FLAG: --exit-on-lock-contention="false" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386475 4813 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386481 4813 flags.go:64] FLAG: --experimental-mounter-path="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386489 4813 flags.go:64] FLAG: --fail-cgroupv1="false" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386495 4813 flags.go:64] FLAG: --fail-swap-on="true" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386500 4813 flags.go:64] FLAG: --feature-gates="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386507 4813 flags.go:64] FLAG: --file-check-frequency="20s" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386513 4813 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386519 4813 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386525 4813 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386531 4813 flags.go:64] FLAG: --healthz-port="10248" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386536 4813 flags.go:64] FLAG: --help="false" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386542 4813 flags.go:64] FLAG: --hostname-override="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386548 4813 flags.go:64] FLAG: --housekeeping-interval="10s" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386553 4813 flags.go:64] FLAG: --http-check-frequency="20s" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386559 4813 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386565 4813 flags.go:64] FLAG: --image-credential-provider-config="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386570 4813 flags.go:64] FLAG: --image-gc-high-threshold="85" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386576 4813 flags.go:64] FLAG: --image-gc-low-threshold="80" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386583 4813 flags.go:64] FLAG: --image-service-endpoint="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386588 4813 flags.go:64] FLAG: --kernel-memcg-notification="false" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386594 4813 flags.go:64] FLAG: --kube-api-burst="100" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386600 4813 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Nov 25 10:31:43 crc kubenswrapper[4813]: 
I1125 10:31:43.386606 4813 flags.go:64] FLAG: --kube-api-qps="50" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386611 4813 flags.go:64] FLAG: --kube-reserved="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386617 4813 flags.go:64] FLAG: --kube-reserved-cgroup="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386622 4813 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386628 4813 flags.go:64] FLAG: --kubelet-cgroups="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386634 4813 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386639 4813 flags.go:64] FLAG: --lock-file="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386645 4813 flags.go:64] FLAG: --log-cadvisor-usage="false" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386650 4813 flags.go:64] FLAG: --log-flush-frequency="5s" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386656 4813 flags.go:64] FLAG: --log-json-info-buffer-size="0" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386671 4813 flags.go:64] FLAG: --log-json-split-stream="false" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386676 4813 flags.go:64] FLAG: --log-text-info-buffer-size="0" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386717 4813 flags.go:64] FLAG: --log-text-split-stream="false" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386725 4813 flags.go:64] FLAG: --logging-format="text" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386730 4813 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386737 4813 flags.go:64] FLAG: --make-iptables-util-chains="true" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386743 4813 flags.go:64] FLAG: --manifest-url="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386748 4813 flags.go:64] FLAG: --manifest-url-header="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386757 4813 flags.go:64] FLAG: --max-housekeeping-interval="15s" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386763 4813 flags.go:64] FLAG: --max-open-files="1000000" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386771 4813 flags.go:64] FLAG: --max-pods="110" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386777 4813 flags.go:64] FLAG: --maximum-dead-containers="-1" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386783 4813 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386788 4813 flags.go:64] FLAG: --memory-manager-policy="None" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386794 4813 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386800 4813 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386806 4813 flags.go:64] FLAG: --node-ip="192.168.126.11" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386812 4813 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386827 4813 flags.go:64] FLAG: --node-status-max-images="50" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386833 4813 flags.go:64] FLAG: 
--node-status-update-frequency="10s" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386839 4813 flags.go:64] FLAG: --oom-score-adj="-999" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386845 4813 flags.go:64] FLAG: --pod-cidr="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386851 4813 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386859 4813 flags.go:64] FLAG: --pod-manifest-path="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386865 4813 flags.go:64] FLAG: --pod-max-pids="-1" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386870 4813 flags.go:64] FLAG: --pods-per-core="0" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386876 4813 flags.go:64] FLAG: --port="10250" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386882 4813 flags.go:64] FLAG: --protect-kernel-defaults="false" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386887 4813 flags.go:64] FLAG: --provider-id="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386893 4813 flags.go:64] FLAG: --qos-reserved="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386898 4813 flags.go:64] FLAG: --read-only-port="10255" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386904 4813 flags.go:64] FLAG: --register-node="true" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386910 4813 flags.go:64] FLAG: --register-schedulable="true" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386915 4813 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386925 4813 flags.go:64] FLAG: --registry-burst="10" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386931 4813 flags.go:64] FLAG: --registry-qps="5" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386939 4813 flags.go:64] FLAG: --reserved-cpus="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386944 4813 flags.go:64] FLAG: --reserved-memory="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386952 4813 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386958 4813 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386964 4813 flags.go:64] FLAG: --rotate-certificates="false" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386969 4813 flags.go:64] FLAG: --rotate-server-certificates="false" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386975 4813 flags.go:64] FLAG: --runonce="false" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386981 4813 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386986 4813 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386992 4813 flags.go:64] FLAG: --seccomp-default="false" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.386999 4813 flags.go:64] FLAG: --serialize-image-pulls="true" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.387007 4813 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.387021 4813 flags.go:64] FLAG: --storage-driver-db="cadvisor" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.387031 4813 
flags.go:64] FLAG: --storage-driver-host="localhost:8086" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.387039 4813 flags.go:64] FLAG: --storage-driver-password="root" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.387046 4813 flags.go:64] FLAG: --storage-driver-secure="false" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.387053 4813 flags.go:64] FLAG: --storage-driver-table="stats" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.387058 4813 flags.go:64] FLAG: --storage-driver-user="root" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.387064 4813 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.387071 4813 flags.go:64] FLAG: --sync-frequency="1m0s" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.387077 4813 flags.go:64] FLAG: --system-cgroups="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.387084 4813 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.387093 4813 flags.go:64] FLAG: --system-reserved-cgroup="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.387099 4813 flags.go:64] FLAG: --tls-cert-file="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.387104 4813 flags.go:64] FLAG: --tls-cipher-suites="[]" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.387111 4813 flags.go:64] FLAG: --tls-min-version="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.387117 4813 flags.go:64] FLAG: --tls-private-key-file="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.387122 4813 flags.go:64] FLAG: --topology-manager-policy="none" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.387128 4813 flags.go:64] FLAG: --topology-manager-policy-options="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.387133 4813 flags.go:64] FLAG: --topology-manager-scope="container" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.387139 4813 flags.go:64] FLAG: --v="2" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.387148 4813 flags.go:64] FLAG: --version="false" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.387156 4813 flags.go:64] FLAG: --vmodule="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.387163 4813 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.387169 4813 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.387335 4813 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.387344 4813 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.387349 4813 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.387354 4813 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.387359 4813 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.387364 4813 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.387369 4813 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.387375 4813 
feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.387379 4813 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.387384 4813 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.387389 4813 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.387394 4813 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.387399 4813 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.387404 4813 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.387408 4813 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.387414 4813 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.387419 4813 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.387424 4813 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.387428 4813 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.387433 4813 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.387438 4813 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.387443 4813 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.387447 4813 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.387452 4813 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.387457 4813 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.387461 4813 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.387468 4813 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
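The FLAG: --name="value" entries above (flags.go:64) are the kubelet dumping every registered flag together with its effective value at verbosity 2, which makes it easy to confirm what the systemd unit actually passed (for example --config=/etc/kubernetes/kubelet.conf, --node-ip=192.168.126.11, --register-with-taints=node-role.kubernetes.io/master=:NoSchedule). Below is a minimal sketch of the same idea using the standard library flag package; kubelet itself uses spf13/pflag, so treat this purely as an illustration of the pattern, with made-up flag names.

```go
// Minimal sketch of a "FLAG: --name=value" dump: register a couple of flags,
// parse the command line, then visit every registered flag and log its
// effective value. Kubelet does the equivalent with spf13/pflag at v=2.
package main

import (
	"flag"
	"fmt"
)

func main() {
	// Example flags only; names and defaults are illustrative.
	flag.String("node-ip", "", "IP address of the node")
	flag.Int("max-pods", 110, "maximum number of pods")
	flag.Parse()

	// VisitAll walks all registered flags, set or not, in name order.
	flag.VisitAll(func(f *flag.Flag) {
		fmt.Printf("FLAG: --%s=%q\n", f.Name, f.Value.String())
	})
}
```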
Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.387474 4813 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.387480 4813 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.387485 4813 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.387490 4813 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.387495 4813 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.387500 4813 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.387504 4813 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.387510 4813 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.387518 4813 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.387524 4813 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.387530 4813 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.387537 4813 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.387542 4813 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.387546 4813 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.387551 4813 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.387556 4813 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.387561 4813 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.387566 4813 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.387571 4813 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.387575 4813 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.387581 4813 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.387585 4813 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.387591 4813 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.387595 4813 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.387602 4813 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.387608 4813 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.387612 4813 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.387618 4813 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.387623 4813 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.387629 4813 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.387634 4813 feature_gate.go:330] unrecognized feature gate: Example Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.387640 4813 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.387645 4813 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.387650 4813 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.387655 4813 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.387661 4813 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.387666 4813 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.387672 4813 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.387676 4813 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.387736 4813 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.387748 4813 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.387756 4813 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.387764 4813 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.387771 4813 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.387786 4813 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.399805 4813 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.399867 4813 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.400003 4813 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.400016 4813 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.400027 4813 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.400038 4813 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.400047 4813 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.400058 4813 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.400067 4813 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.400075 4813 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.400083 4813 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.400092 4813 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.400100 4813 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.400109 4813 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.400117 4813 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.400126 4813 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.400134 4813 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.400143 4813 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.400152 4813 feature_gate.go:330] 
unrecognized feature gate: NewOLM Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.400160 4813 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.400168 4813 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.400176 4813 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.400186 4813 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.400198 4813 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.400208 4813 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.400217 4813 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.400226 4813 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.400234 4813 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.400243 4813 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.400251 4813 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.400260 4813 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.400268 4813 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.400276 4813 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.400285 4813 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.400297 4813 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.400307 4813 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.400316 4813 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.400325 4813 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.400335 4813 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.400344 4813 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.400353 4813 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.400362 4813 feature_gate.go:330] unrecognized feature gate: Example Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.400372 4813 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.400381 4813 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.400390 4813 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.400401 4813 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.400412 4813 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.400422 4813 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.400431 4813 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.400441 4813 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.400449 4813 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.400458 4813 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.400470 4813 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.400479 4813 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.400488 4813 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.400499 4813 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.400510 4813 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.400520 4813 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.400529 4813 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.400538 4813 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.400548 4813 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.400557 4813 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.400566 4813 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.400574 4813 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.400582 4813 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.400591 4813 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.400599 4813 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.400607 4813 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.400615 4813 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.400625 4813 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.400633 4813 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.400642 4813 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.400650 4813 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.400665 4813 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.400926 4813 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.400941 4813 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.400952 4813 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.400961 4813 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.400970 4813 feature_gate.go:330] unrecognized feature 
gate: MixedCPUsAllocation Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.400979 4813 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.400989 4813 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.400998 4813 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.401007 4813 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.401016 4813 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.401026 4813 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.401035 4813 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.401045 4813 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.401055 4813 feature_gate.go:330] unrecognized feature gate: Example Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.401064 4813 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.401073 4813 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.401082 4813 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.401091 4813 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.401099 4813 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.401108 4813 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.401116 4813 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.401125 4813 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.401133 4813 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.401144 4813 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.401155 4813 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.401164 4813 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.401174 4813 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.401182 4813 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.401191 4813 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.401199 4813 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.401207 4813 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.401216 4813 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.401224 4813 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.401233 4813 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.401244 4813 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.401253 4813 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.401262 4813 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.401275 4813 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.401286 4813 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.401297 4813 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.401307 4813 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.401318 4813 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.401330 4813 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.401338 4813 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.401347 4813 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.401355 4813 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.401363 4813 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.401372 4813 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.401383 4813 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. 
It will be removed in a future release. Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.401394 4813 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.401404 4813 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.401412 4813 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.401422 4813 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.401432 4813 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.401448 4813 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.401469 4813 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.401481 4813 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.401492 4813 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.401503 4813 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.401513 4813 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.401524 4813 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.401534 4813 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.401545 4813 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.401555 4813 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.401566 4813 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.401576 4813 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.401587 4813 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.401597 4813 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.401611 4813 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.401625 4813 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.401637 4813 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.401654 4813 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.401958 4813 server.go:940] "Client rotation is on, will bootstrap in background" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.409144 4813 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.409351 4813 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.411538 4813 server.go:997] "Starting client certificate rotation" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.411595 4813 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.411762 4813 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-06 11:16:01.674287371 +0000 UTC Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.411842 4813 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 264h44m18.262448644s for next certificate rotation Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.446712 4813 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.448466 4813 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.471243 4813 log.go:25] "Validated CRI v1 runtime API" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.509193 4813 log.go:25] "Validated CRI v1 image API" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.511363 4813 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.518771 4813 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-11-25-10-26-48-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.518810 4813 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 
fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.540485 4813 manager.go:217] Machine: {Timestamp:2025-11-25 10:31:43.536166728 +0000 UTC m=+0.665876664 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:85f815b0-dc24-49ca-a7fb-6bc8e198cbb1 BootID:1b8f6803-8c92-44d2-bc35-374b0f00608e Filesystems:[{Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:11:9c:b1 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:11:9c:b1 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:2d:ba:0d Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:4c:ab:62 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:c4:32:3f Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:46:be:49 Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:eb:db:2a Speed:-1 Mtu:1496} {Name:eth10 MacAddress:8a:6b:21:6d:d8:1d Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:8e:bd:54:25:73:84 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 
Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.541181 4813 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
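For readability, here is a quick conversion of the byte counts quoted in the cAdvisor "Machine:" entry above into GiB. This is a minimal, editor-added Python sketch; every number in it is copied verbatim from that entry, and nothing else is assumed.

# Minimal sketch: convert the capacities reported in the "Machine:" entry above
# from raw bytes to GiB (1 GiB = 1024**3 bytes). Values copied from the log.
GIB = 1024 ** 3
capacities = {
    "MemoryCapacity": 33654128640,      # main memory
    "/dev/vda4 (/var)": 85292941312,
    "/dev/vda3 (/boot)": 366869504,
    "vda (whole disk)": 214748364800,
}
for name, size_bytes in capacities.items():
    print(f"{name}: {size_bytes / GIB:.1f} GiB")   # ~31.3, ~79.4, ~0.3, 200.0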
Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.541534 4813 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.542260 4813 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.542829 4813 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.542896 4813 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.543497 4813 topology_manager.go:138] "Creating topology manager with none policy" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.543519 4813 container_manager_linux.go:303] "Creating device plugin manager" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.544271 4813 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.544394 4813 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.545521 4813 state_mem.go:36] "Initialized new in-memory state store" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.545793 4813 server.go:1245] "Using root directory" path="/var/lib/kubelet" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.552314 4813 kubelet.go:418] "Attempting to sync node with API server" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.552362 4813 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" 
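The container_manager_linux.go entry above embeds the kubelet's effective node config as a single JSON blob (system-reserved resources, hard-eviction thresholds, pod PID limit, cgroup driver and version). A minimal sketch for pulling that blob out of a saved copy of this journal output and printing the key fields follows; the filename kubelet.log is an assumption, and the greedy regex is only intended to work on a one-line entry like the one shown here.

# Minimal sketch: extract the nodeConfig JSON from the
# "Creating Container Manager object based on Node Config" log entry above and
# print reserved resources and hard-eviction thresholds.
# Assumption: the journal output has been saved to "kubelet.log" (hypothetical name).
import json
import re

PATTERN = re.compile(r'Node Config" nodeConfig=(\{.*\})')

with open("kubelet.log", encoding="utf-8") as fh:
    for line in fh:
        match = PATTERN.search(line)
        if match is None:
            continue
        cfg = json.loads(match.group(1))
        print("SystemReserved:", cfg["SystemReserved"])      # cpu/memory/ephemeral-storage
        print("PodPidsLimit:", cfg["PodPidsLimit"])          # 4096 in the entry above
        for t in cfg["HardEvictionThresholds"]:
            print(t["Signal"], t["Operator"], t["Value"])
        break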
Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.552401 4813 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.552418 4813 kubelet.go:324] "Adding apiserver pod source" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.552437 4813 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.556674 4813 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.558028 4813 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.561085 4813 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.562424 4813 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.91:6443: connect: connection refused Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.562424 4813 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.91:6443: connect: connection refused Nov 25 10:31:43 crc kubenswrapper[4813]: E1125 10:31:43.562555 4813 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.91:6443: connect: connection refused" logger="UnhandledError" Nov 25 10:31:43 crc kubenswrapper[4813]: E1125 10:31:43.562590 4813 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.91:6443: connect: connection refused" logger="UnhandledError" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.563128 4813 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.563173 4813 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.563194 4813 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.563209 4813 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.563461 4813 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.563492 4813 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.563506 4813 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.563529 4813 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/downward-api" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.563543 4813 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.563559 4813 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.563643 4813 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.563751 4813 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.564799 4813 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.565773 4813 server.go:1280] "Started kubelet" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.566642 4813 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.91:6443: connect: connection refused Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.566951 4813 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.566953 4813 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.567889 4813 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 25 10:31:43 crc systemd[1]: Started Kubernetes Kubelet. Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.568760 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.568807 4813 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.569253 4813 volume_manager.go:287] "The desired_state_of_world populator starts" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.569268 4813 volume_manager.go:289] "Starting Kubelet Volume Manager" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.568909 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 08:11:29.743649735 +0000 UTC Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.569354 4813 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 25 10:31:43 crc kubenswrapper[4813]: E1125 10:31:43.569619 4813 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 25 10:31:43 crc kubenswrapper[4813]: E1125 10:31:43.570922 4813 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.91:6443: connect: connection refused" interval="200ms" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.571132 4813 factory.go:55] Registering systemd factory Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.571265 4813 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.91:6443: connect: 
connection refused Nov 25 10:31:43 crc kubenswrapper[4813]: E1125 10:31:43.571373 4813 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.91:6443: connect: connection refused" logger="UnhandledError" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.572965 4813 factory.go:221] Registration of the systemd container factory successfully Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.574034 4813 server.go:460] "Adding debug handlers to kubelet server" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.574222 4813 factory.go:153] Registering CRI-O factory Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.574312 4813 factory.go:221] Registration of the crio container factory successfully Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.574452 4813 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.574547 4813 factory.go:103] Registering Raw factory Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.574620 4813 manager.go:1196] Started watching for new ooms in manager Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.577581 4813 manager.go:319] Starting recovery of all containers Nov 25 10:31:43 crc kubenswrapper[4813]: E1125 10:31:43.575951 4813 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.91:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187b39520a200b42 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-25 10:31:43.56565485 +0000 UTC m=+0.695364756,LastTimestamp:2025-11-25 10:31:43.56565485 +0000 UTC m=+0.695364756,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.581151 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.581204 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.581215 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.581226 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.581240 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.581251 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.581262 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.581273 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.581285 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.581293 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.581303 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.581311 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.581321 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.581333 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.581343 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.581351 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.581359 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.581369 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.581379 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.581389 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.581399 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.581410 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.581421 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.581434 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.581456 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.581469 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.581482 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.581493 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.581502 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.581511 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.581523 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.581560 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.581569 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.581591 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.581601 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.581612 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.581621 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" 
volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.581632 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.581641 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.581651 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.581661 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.584202 4813 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.584271 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.584286 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.584297 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.584323 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.584346 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.584357 4813 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.584383 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.584398 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.584422 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.584446 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.584456 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.584483 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.584510 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.584526 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.584539 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.584551 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.584562 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.584604 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.584630 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.584640 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.584668 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.584698 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.584731 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.584752 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.584776 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.584786 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.584796 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.584820 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.584836 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.584846 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.584855 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.584864 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.584877 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.584901 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.584923 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.584932 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.584941 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.584962 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.584971 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" 
volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.585021 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.585048 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.585074 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.585084 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.585093 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.585104 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.585114 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.585125 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.585135 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.585146 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.585157 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.585788 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.585853 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.585874 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.585890 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.585905 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.585920 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.585934 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.585948 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.585962 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.585978 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.585992 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.586006 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.586020 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.586065 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.586083 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.586102 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.586118 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.586142 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.586175 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.586196 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.586215 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.586238 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.586256 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.586271 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.586283 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.586297 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.586311 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.586325 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.586341 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.586355 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.586370 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.586382 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.586394 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.586407 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.586423 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.586437 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.586453 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.586471 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.586490 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.586504 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.586518 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.586531 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.586547 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.586561 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.586575 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.586587 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.586601 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.586617 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.586632 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.586646 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.586659 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.586673 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.586709 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.586721 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.586735 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.586748 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.586761 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.586774 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.586788 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.586800 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.586813 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.586827 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.586841 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.586854 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.586866 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.586879 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.586893 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.586906 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.586919 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.586931 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.586945 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.586957 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.586970 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.586983 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.586997 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.587010 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.587023 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" 
volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.587037 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.587052 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.587064 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.587078 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.587090 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.587103 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.587115 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.587132 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.587149 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.587165 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.587180 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.587276 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.587294 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.587307 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.587321 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.587333 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.587345 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.587359 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.587377 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.587396 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.587414 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.587430 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" 
volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.587443 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.587457 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.587469 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.587484 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.587499 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.587514 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.587531 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.587545 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.587561 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.587574 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.587590 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" 
volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.587606 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.587619 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.587634 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.587647 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.587660 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.587674 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.587707 4813 reconstruct.go:97] "Volume reconstruction finished" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.587717 4813 reconciler.go:26] "Reconciler: start to sync state" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.595070 4813 manager.go:324] Recovery completed Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.604952 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.607193 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.607293 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.607346 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.609541 4813 cpu_manager.go:225] "Starting CPU manager" policy="none" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.609560 4813 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.609581 4813 state_mem.go:36] "Initialized new in-memory state store" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.618249 4813 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.620176 4813 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.620212 4813 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.620238 4813 kubelet.go:2335] "Starting kubelet main sync loop" Nov 25 10:31:43 crc kubenswrapper[4813]: E1125 10:31:43.620660 4813 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 25 10:31:43 crc kubenswrapper[4813]: W1125 10:31:43.621247 4813 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.91:6443: connect: connection refused Nov 25 10:31:43 crc kubenswrapper[4813]: E1125 10:31:43.621315 4813 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.91:6443: connect: connection refused" logger="UnhandledError" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.633948 4813 policy_none.go:49] "None policy: Start" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.634919 4813 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.634947 4813 state_mem.go:35] "Initializing new in-memory state store" Nov 25 10:31:43 crc kubenswrapper[4813]: E1125 10:31:43.669962 4813 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.693928 4813 manager.go:334] "Starting Device Plugin manager" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.693977 4813 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.693990 4813 server.go:79] "Starting device plugin registration server" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.694440 4813 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.694457 4813 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.694570 4813 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.698727 4813 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.698762 4813 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 25 10:31:43 crc kubenswrapper[4813]: E1125 10:31:43.703237 4813 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.720822 4813 kubelet.go:2421] "SyncLoop ADD" source="file" 
pods=["openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.721067 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.723446 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.723499 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.723510 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.723702 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.724042 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.724109 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.724762 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.724823 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.724838 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.725023 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.725164 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.725242 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.725214 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.725357 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.725047 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.726309 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.726334 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.726338 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.726345 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.726356 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.726366 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.726469 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.726815 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.727523 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.727086 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.727556 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.727565 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.727669 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.727854 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.727897 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.728030 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.728057 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.728065 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.728449 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.728473 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.728482 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.728713 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.728742 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.729371 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.729396 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.729515 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.729538 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.729547 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.729406 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:31:43 crc kubenswrapper[4813]: E1125 10:31:43.772317 4813 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.91:6443: connect: connection refused" interval="400ms" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.789329 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.789862 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.789977 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.790072 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.790158 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.790283 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.790375 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.790479 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.790633 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.790849 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.791010 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.791139 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.791363 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.791440 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.791466 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.794564 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.795707 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.795759 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.795787 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.795814 4813 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 25 10:31:43 crc kubenswrapper[4813]: E1125 10:31:43.796303 4813 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.91:6443: connect: connection refused" node="crc" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.892318 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.892394 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.892478 4813 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.892514 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.892484 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.892573 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.892514 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.892637 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.892669 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.892725 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.892727 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.892759 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.892778 4813 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.892790 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.892805 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.892852 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.892862 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.892829 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.892811 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.892911 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.892974 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.892985 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.892582 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.893049 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.893011 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.893094 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.893138 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.893162 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.893113 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.893232 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.996896 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.998236 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.998267 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:31:43 crc 
kubenswrapper[4813]: I1125 10:31:43.998277 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:31:43 crc kubenswrapper[4813]: I1125 10:31:43.998332 4813 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 25 10:31:43 crc kubenswrapper[4813]: E1125 10:31:43.998747 4813 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.91:6443: connect: connection refused" node="crc" Nov 25 10:31:44 crc kubenswrapper[4813]: I1125 10:31:44.049087 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 25 10:31:44 crc kubenswrapper[4813]: I1125 10:31:44.052719 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 10:31:44 crc kubenswrapper[4813]: I1125 10:31:44.084014 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 10:31:44 crc kubenswrapper[4813]: I1125 10:31:44.092108 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 25 10:31:44 crc kubenswrapper[4813]: W1125 10:31:44.096122 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-46f94a3e4ad8ef5a3ba9d1f05761ac92146e56adcf21513e0a5ab457e2d6cae4 WatchSource:0}: Error finding container 46f94a3e4ad8ef5a3ba9d1f05761ac92146e56adcf21513e0a5ab457e2d6cae4: Status 404 returned error can't find the container with id 46f94a3e4ad8ef5a3ba9d1f05761ac92146e56adcf21513e0a5ab457e2d6cae4 Nov 25 10:31:44 crc kubenswrapper[4813]: I1125 10:31:44.097253 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 10:31:44 crc kubenswrapper[4813]: W1125 10:31:44.117832 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-b48ae3eda7da419c7df127f3330b8037f8605b31b6a88778bd00eba18d3a0d0d WatchSource:0}: Error finding container b48ae3eda7da419c7df127f3330b8037f8605b31b6a88778bd00eba18d3a0d0d: Status 404 returned error can't find the container with id b48ae3eda7da419c7df127f3330b8037f8605b31b6a88778bd00eba18d3a0d0d Nov 25 10:31:44 crc kubenswrapper[4813]: W1125 10:31:44.127933 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-b067672b86e8c3fe9a90b4375908f788c65bdf9c27f150daf76ce99a24dbb729 WatchSource:0}: Error finding container b067672b86e8c3fe9a90b4375908f788c65bdf9c27f150daf76ce99a24dbb729: Status 404 returned error can't find the container with id b067672b86e8c3fe9a90b4375908f788c65bdf9c27f150daf76ce99a24dbb729 Nov 25 10:31:44 crc kubenswrapper[4813]: E1125 10:31:44.174081 4813 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.91:6443: connect: connection refused" interval="800ms" Nov 25 10:31:44 crc kubenswrapper[4813]: E1125 10:31:44.217185 4813 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.91:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187b39520a200b42 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-25 10:31:43.56565485 +0000 UTC m=+0.695364756,LastTimestamp:2025-11-25 10:31:43.56565485 +0000 UTC m=+0.695364756,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 25 10:31:44 crc kubenswrapper[4813]: I1125 10:31:44.398878 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 10:31:44 crc kubenswrapper[4813]: I1125 10:31:44.400112 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:31:44 crc kubenswrapper[4813]: I1125 10:31:44.400159 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:31:44 crc kubenswrapper[4813]: I1125 10:31:44.400172 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:31:44 crc kubenswrapper[4813]: I1125 10:31:44.400199 4813 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 25 10:31:44 crc kubenswrapper[4813]: E1125 10:31:44.400552 4813 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.91:6443: connect: connection refused" node="crc" Nov 25 10:31:44 crc kubenswrapper[4813]: I1125 10:31:44.567907 4813 csi_plugin.go:884] Failed to contact API server when 
waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.91:6443: connect: connection refused Nov 25 10:31:44 crc kubenswrapper[4813]: I1125 10:31:44.569912 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 07:43:33.374681732 +0000 UTC Nov 25 10:31:44 crc kubenswrapper[4813]: W1125 10:31:44.614432 4813 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.91:6443: connect: connection refused Nov 25 10:31:44 crc kubenswrapper[4813]: E1125 10:31:44.614526 4813 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.91:6443: connect: connection refused" logger="UnhandledError" Nov 25 10:31:44 crc kubenswrapper[4813]: I1125 10:31:44.626123 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d051cabf028a764117438548880b8335ef7596b11d232652dc5c84073b1eb423"} Nov 25 10:31:44 crc kubenswrapper[4813]: I1125 10:31:44.627263 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"b067672b86e8c3fe9a90b4375908f788c65bdf9c27f150daf76ce99a24dbb729"} Nov 25 10:31:44 crc kubenswrapper[4813]: I1125 10:31:44.628274 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"f02d7250f66f34f48689e0ab7d1250e90361746d71ce37d0f621ebe35029adb7"} Nov 25 10:31:44 crc kubenswrapper[4813]: I1125 10:31:44.629170 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b48ae3eda7da419c7df127f3330b8037f8605b31b6a88778bd00eba18d3a0d0d"} Nov 25 10:31:44 crc kubenswrapper[4813]: I1125 10:31:44.630250 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"46f94a3e4ad8ef5a3ba9d1f05761ac92146e56adcf21513e0a5ab457e2d6cae4"} Nov 25 10:31:44 crc kubenswrapper[4813]: W1125 10:31:44.775314 4813 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.91:6443: connect: connection refused Nov 25 10:31:44 crc kubenswrapper[4813]: E1125 10:31:44.775388 4813 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.91:6443: connect: connection refused" logger="UnhandledError" Nov 25 10:31:44 crc kubenswrapper[4813]: E1125 10:31:44.974810 4813 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.91:6443: connect: connection refused" interval="1.6s" Nov 25 10:31:45 crc kubenswrapper[4813]: W1125 10:31:45.062063 4813 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.91:6443: connect: connection refused Nov 25 10:31:45 crc kubenswrapper[4813]: E1125 10:31:45.062149 4813 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.91:6443: connect: connection refused" logger="UnhandledError" Nov 25 10:31:45 crc kubenswrapper[4813]: W1125 10:31:45.127937 4813 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.91:6443: connect: connection refused Nov 25 10:31:45 crc kubenswrapper[4813]: E1125 10:31:45.128598 4813 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.91:6443: connect: connection refused" logger="UnhandledError" Nov 25 10:31:45 crc kubenswrapper[4813]: I1125 10:31:45.201184 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 10:31:45 crc kubenswrapper[4813]: I1125 10:31:45.202871 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:31:45 crc kubenswrapper[4813]: I1125 10:31:45.202922 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:31:45 crc kubenswrapper[4813]: I1125 10:31:45.202935 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:31:45 crc kubenswrapper[4813]: I1125 10:31:45.202964 4813 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 25 10:31:45 crc kubenswrapper[4813]: E1125 10:31:45.203402 4813 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.91:6443: connect: connection refused" node="crc" Nov 25 10:31:45 crc kubenswrapper[4813]: I1125 10:31:45.567937 4813 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.91:6443: connect: connection refused Nov 25 10:31:45 crc kubenswrapper[4813]: I1125 10:31:45.570183 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 08:58:31.696526411 +0000 UTC Nov 25 10:31:45 crc kubenswrapper[4813]: I1125 10:31:45.634369 4813 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" 
containerID="92dc0767a72d1306948ef6e91b0807d8954b027eb097d3e64b864812507deb4f" exitCode=0 Nov 25 10:31:45 crc kubenswrapper[4813]: I1125 10:31:45.634511 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"92dc0767a72d1306948ef6e91b0807d8954b027eb097d3e64b864812507deb4f"} Nov 25 10:31:45 crc kubenswrapper[4813]: I1125 10:31:45.634594 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 10:31:45 crc kubenswrapper[4813]: I1125 10:31:45.635756 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:31:45 crc kubenswrapper[4813]: I1125 10:31:45.635784 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:31:45 crc kubenswrapper[4813]: I1125 10:31:45.635794 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:31:45 crc kubenswrapper[4813]: I1125 10:31:45.637394 4813 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f3af1cb4a9a556e116874a9b7f7cb75a99db83b991b65280b6b2a96233ca0a85" exitCode=0 Nov 25 10:31:45 crc kubenswrapper[4813]: I1125 10:31:45.637501 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"f3af1cb4a9a556e116874a9b7f7cb75a99db83b991b65280b6b2a96233ca0a85"} Nov 25 10:31:45 crc kubenswrapper[4813]: I1125 10:31:45.637554 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 10:31:45 crc kubenswrapper[4813]: I1125 10:31:45.638851 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:31:45 crc kubenswrapper[4813]: I1125 10:31:45.638884 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:31:45 crc kubenswrapper[4813]: I1125 10:31:45.638893 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:31:45 crc kubenswrapper[4813]: I1125 10:31:45.640085 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 10:31:45 crc kubenswrapper[4813]: I1125 10:31:45.640164 4813 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="c9ab19e784bbd45e4f4c23288211674ac0d0affbe2736d338967e9237d672760" exitCode=0 Nov 25 10:31:45 crc kubenswrapper[4813]: I1125 10:31:45.640231 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 10:31:45 crc kubenswrapper[4813]: I1125 10:31:45.640255 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"c9ab19e784bbd45e4f4c23288211674ac0d0affbe2736d338967e9237d672760"} Nov 25 10:31:45 crc kubenswrapper[4813]: I1125 10:31:45.640941 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:31:45 crc kubenswrapper[4813]: I1125 10:31:45.640981 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 10:31:45 crc kubenswrapper[4813]: I1125 10:31:45.640997 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:31:45 crc kubenswrapper[4813]: I1125 10:31:45.641345 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:31:45 crc kubenswrapper[4813]: I1125 10:31:45.641377 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:31:45 crc kubenswrapper[4813]: I1125 10:31:45.641387 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:31:45 crc kubenswrapper[4813]: I1125 10:31:45.642601 4813 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="312228ac2cd8a213ffcac9564ff0abe8b6f330abca932992170d2f6ccea5edb3" exitCode=0 Nov 25 10:31:45 crc kubenswrapper[4813]: I1125 10:31:45.642706 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"312228ac2cd8a213ffcac9564ff0abe8b6f330abca932992170d2f6ccea5edb3"} Nov 25 10:31:45 crc kubenswrapper[4813]: I1125 10:31:45.642762 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 10:31:45 crc kubenswrapper[4813]: I1125 10:31:45.643739 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:31:45 crc kubenswrapper[4813]: I1125 10:31:45.643773 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:31:45 crc kubenswrapper[4813]: I1125 10:31:45.643785 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:31:45 crc kubenswrapper[4813]: I1125 10:31:45.646577 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"11e2aa9eaa941ade1982256194422becbe3f375508cd507f603a822b10e03134"} Nov 25 10:31:45 crc kubenswrapper[4813]: I1125 10:31:45.646609 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"6e593ff2a6412d8dfd3cd96e456f4fe9e2f8b04302d5b9036b828a3cf480b573"} Nov 25 10:31:45 crc kubenswrapper[4813]: I1125 10:31:45.646625 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"761ff3f6b4afa8edd4892d9fe727e977fb9700a8c7ab1c149c12bfa6431951c7"} Nov 25 10:31:45 crc kubenswrapper[4813]: I1125 10:31:45.646638 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"bf09669b247e0daa0787d296aa833570e1a542082a7a698bb499dc34f16fa4be"} Nov 25 10:31:45 crc kubenswrapper[4813]: I1125 10:31:45.646931 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 10:31:45 crc kubenswrapper[4813]: I1125 10:31:45.648033 4813 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:31:45 crc kubenswrapper[4813]: I1125 10:31:45.648069 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:31:45 crc kubenswrapper[4813]: I1125 10:31:45.648085 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:31:46 crc kubenswrapper[4813]: I1125 10:31:46.351855 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 10:31:46 crc kubenswrapper[4813]: I1125 10:31:46.568180 4813 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.91:6443: connect: connection refused Nov 25 10:31:46 crc kubenswrapper[4813]: I1125 10:31:46.570585 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 12:40:07.999475213 +0000 UTC Nov 25 10:31:46 crc kubenswrapper[4813]: I1125 10:31:46.570641 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 554h8m21.428837405s for next certificate rotation Nov 25 10:31:46 crc kubenswrapper[4813]: E1125 10:31:46.577110 4813 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.91:6443: connect: connection refused" interval="3.2s" Nov 25 10:31:46 crc kubenswrapper[4813]: I1125 10:31:46.651580 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"21dd198f1963287a0866dc0aa9d9854472f833cac0d0146a142a370e236b09f4"} Nov 25 10:31:46 crc kubenswrapper[4813]: I1125 10:31:46.651633 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"7fa62598abd071ec69894326a022e35c2b383a5d5a1b893b0ecc1e30b8b775ff"} Nov 25 10:31:46 crc kubenswrapper[4813]: I1125 10:31:46.651648 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"16b823e81d1130cdb4373ba0b3d00a5f2d0717e34dcf36d2172550263b44e953"} Nov 25 10:31:46 crc kubenswrapper[4813]: I1125 10:31:46.651611 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 10:31:46 crc kubenswrapper[4813]: I1125 10:31:46.652546 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:31:46 crc kubenswrapper[4813]: I1125 10:31:46.652582 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:31:46 crc kubenswrapper[4813]: I1125 10:31:46.652593 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:31:46 crc kubenswrapper[4813]: I1125 10:31:46.654143 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"71ac844ac0be61d9aa56028670f20db4c9c600feffd4355d9636253b7d50e18d"} Nov 25 10:31:46 crc kubenswrapper[4813]: I1125 10:31:46.654160 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 10:31:46 crc kubenswrapper[4813]: I1125 10:31:46.655216 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:31:46 crc kubenswrapper[4813]: I1125 10:31:46.655249 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:31:46 crc kubenswrapper[4813]: I1125 10:31:46.655261 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:31:46 crc kubenswrapper[4813]: I1125 10:31:46.659187 4813 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="f9555defdcfa64eaa6586b0282eb694b978ad2a6ffdcbc7888aa1e2092eb171e" exitCode=0 Nov 25 10:31:46 crc kubenswrapper[4813]: I1125 10:31:46.659284 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"f9555defdcfa64eaa6586b0282eb694b978ad2a6ffdcbc7888aa1e2092eb171e"} Nov 25 10:31:46 crc kubenswrapper[4813]: I1125 10:31:46.659313 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 10:31:46 crc kubenswrapper[4813]: I1125 10:31:46.660199 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:31:46 crc kubenswrapper[4813]: I1125 10:31:46.660242 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:31:46 crc kubenswrapper[4813]: I1125 10:31:46.660258 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:31:46 crc kubenswrapper[4813]: I1125 10:31:46.663594 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"46e1b456988c700012c86fac792b65d2e7c9a049057d5a17efbf600418191910"} Nov 25 10:31:46 crc kubenswrapper[4813]: I1125 10:31:46.663657 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 10:31:46 crc kubenswrapper[4813]: I1125 10:31:46.663653 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f80f2017cddd8c12997b1818074df5aa37a902dca43c4b60dda58080e1887f8c"} Nov 25 10:31:46 crc kubenswrapper[4813]: I1125 10:31:46.663997 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f225dc69c294a0063eda858d71902e848fb59d4595c25bfeecdf8dfb60fdcd6f"} Nov 25 10:31:46 crc kubenswrapper[4813]: I1125 10:31:46.664022 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"cf4d6feac8fd516ce2d5e2ec13519c2bbd2d152cffe7c434fe2c4b478e8c9a7e"} Nov 25 
10:31:46 crc kubenswrapper[4813]: I1125 10:31:46.664503 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:31:46 crc kubenswrapper[4813]: I1125 10:31:46.664533 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:31:46 crc kubenswrapper[4813]: I1125 10:31:46.664544 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:31:46 crc kubenswrapper[4813]: I1125 10:31:46.803743 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 10:31:46 crc kubenswrapper[4813]: I1125 10:31:46.805230 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:31:46 crc kubenswrapper[4813]: I1125 10:31:46.805285 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:31:46 crc kubenswrapper[4813]: I1125 10:31:46.805297 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:31:46 crc kubenswrapper[4813]: I1125 10:31:46.805320 4813 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 25 10:31:46 crc kubenswrapper[4813]: E1125 10:31:46.806171 4813 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.91:6443: connect: connection refused" node="crc" Nov 25 10:31:46 crc kubenswrapper[4813]: W1125 10:31:46.812052 4813 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.91:6443: connect: connection refused Nov 25 10:31:46 crc kubenswrapper[4813]: E1125 10:31:46.812152 4813 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.91:6443: connect: connection refused" logger="UnhandledError" Nov 25 10:31:46 crc kubenswrapper[4813]: W1125 10:31:46.886060 4813 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.91:6443: connect: connection refused Nov 25 10:31:46 crc kubenswrapper[4813]: E1125 10:31:46.886136 4813 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.91:6443: connect: connection refused" logger="UnhandledError" Nov 25 10:31:47 crc kubenswrapper[4813]: I1125 10:31:47.667855 4813 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="c1c3965c3df4fbcae1fd87ecc86a359203bb7808b46ff96f57910f3823990023" exitCode=0 Nov 25 10:31:47 crc kubenswrapper[4813]: I1125 10:31:47.667917 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"c1c3965c3df4fbcae1fd87ecc86a359203bb7808b46ff96f57910f3823990023"} Nov 25 10:31:47 crc kubenswrapper[4813]: I1125 10:31:47.668002 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 10:31:47 crc kubenswrapper[4813]: I1125 10:31:47.669050 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:31:47 crc kubenswrapper[4813]: I1125 10:31:47.669088 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:31:47 crc kubenswrapper[4813]: I1125 10:31:47.669101 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:31:47 crc kubenswrapper[4813]: I1125 10:31:47.672590 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e393f04b541e0fc8c686b42396605529aa65fdaaf6602dd7c64a322a5071d643"} Nov 25 10:31:47 crc kubenswrapper[4813]: I1125 10:31:47.672653 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 10:31:47 crc kubenswrapper[4813]: I1125 10:31:47.672674 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 10:31:47 crc kubenswrapper[4813]: I1125 10:31:47.672737 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 10:31:47 crc kubenswrapper[4813]: I1125 10:31:47.673043 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 10:31:47 crc kubenswrapper[4813]: I1125 10:31:47.673127 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 10:31:47 crc kubenswrapper[4813]: I1125 10:31:47.674070 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:31:47 crc kubenswrapper[4813]: I1125 10:31:47.674095 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:31:47 crc kubenswrapper[4813]: I1125 10:31:47.674104 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:31:47 crc kubenswrapper[4813]: I1125 10:31:47.674171 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:31:47 crc kubenswrapper[4813]: I1125 10:31:47.674199 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:31:47 crc kubenswrapper[4813]: I1125 10:31:47.674212 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:31:47 crc kubenswrapper[4813]: I1125 10:31:47.675274 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:31:47 crc kubenswrapper[4813]: I1125 10:31:47.675337 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:31:47 crc kubenswrapper[4813]: I1125 10:31:47.675368 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 
10:31:47 crc kubenswrapper[4813]: I1125 10:31:47.675544 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:31:47 crc kubenswrapper[4813]: I1125 10:31:47.675715 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:31:47 crc kubenswrapper[4813]: I1125 10:31:47.675741 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:31:47 crc kubenswrapper[4813]: I1125 10:31:47.879410 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 10:31:48 crc kubenswrapper[4813]: I1125 10:31:48.640667 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 10:31:48 crc kubenswrapper[4813]: I1125 10:31:48.678372 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f456807edc356198a6818535fbf1a8655fe044256f79e800a4b01e50e16fb439"} Nov 25 10:31:48 crc kubenswrapper[4813]: I1125 10:31:48.678424 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"cfadc4a3b9f58e9c60a45c067059500a9ddd478b4b3f9ecb00a788732dd948f3"} Nov 25 10:31:48 crc kubenswrapper[4813]: I1125 10:31:48.678463 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 10:31:48 crc kubenswrapper[4813]: I1125 10:31:48.678520 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 10:31:48 crc kubenswrapper[4813]: I1125 10:31:48.678601 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 10:31:48 crc kubenswrapper[4813]: I1125 10:31:48.678634 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 10:31:48 crc kubenswrapper[4813]: I1125 10:31:48.679522 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:31:48 crc kubenswrapper[4813]: I1125 10:31:48.679547 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:31:48 crc kubenswrapper[4813]: I1125 10:31:48.679523 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:31:48 crc kubenswrapper[4813]: I1125 10:31:48.679572 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:31:48 crc kubenswrapper[4813]: I1125 10:31:48.679584 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:31:48 crc kubenswrapper[4813]: I1125 10:31:48.679556 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:31:48 crc kubenswrapper[4813]: I1125 10:31:48.680404 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:31:48 crc kubenswrapper[4813]: I1125 10:31:48.680430 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 
10:31:48 crc kubenswrapper[4813]: I1125 10:31:48.680440 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:31:48 crc kubenswrapper[4813]: I1125 10:31:48.786421 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 10:31:49 crc kubenswrapper[4813]: I1125 10:31:49.352647 4813 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 25 10:31:49 crc kubenswrapper[4813]: I1125 10:31:49.352761 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 25 10:31:49 crc kubenswrapper[4813]: I1125 10:31:49.688526 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 10:31:49 crc kubenswrapper[4813]: I1125 10:31:49.688601 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 10:31:49 crc kubenswrapper[4813]: I1125 10:31:49.688519 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"47c28a57d35d4a2274eae81cce464e47a97053f9fb053f844c0fff1afdd59f0c"} Nov 25 10:31:49 crc kubenswrapper[4813]: I1125 10:31:49.688777 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"0d80fb2709b14ee7339075a3080b154a71d5f747f6f346959239a05ed1b81dd8"} Nov 25 10:31:49 crc kubenswrapper[4813]: I1125 10:31:49.688815 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"737112f4aeb499493f525ebbf68d32cf74b3115b54f73c98e15c6db912b2856c"} Nov 25 10:31:49 crc kubenswrapper[4813]: I1125 10:31:49.689847 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:31:49 crc kubenswrapper[4813]: I1125 10:31:49.689894 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:31:49 crc kubenswrapper[4813]: I1125 10:31:49.689917 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:31:49 crc kubenswrapper[4813]: I1125 10:31:49.690119 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:31:49 crc kubenswrapper[4813]: I1125 10:31:49.690158 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:31:49 crc kubenswrapper[4813]: I1125 10:31:49.690174 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:31:50 crc kubenswrapper[4813]: I1125 10:31:50.007365 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach" Nov 25 10:31:50 crc kubenswrapper[4813]: I1125 10:31:50.009159 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:31:50 crc kubenswrapper[4813]: I1125 10:31:50.009218 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:31:50 crc kubenswrapper[4813]: I1125 10:31:50.009237 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:31:50 crc kubenswrapper[4813]: I1125 10:31:50.009270 4813 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 25 10:31:50 crc kubenswrapper[4813]: I1125 10:31:50.622898 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 10:31:50 crc kubenswrapper[4813]: I1125 10:31:50.623165 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 10:31:50 crc kubenswrapper[4813]: I1125 10:31:50.625026 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:31:50 crc kubenswrapper[4813]: I1125 10:31:50.625100 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:31:50 crc kubenswrapper[4813]: I1125 10:31:50.625125 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:31:50 crc kubenswrapper[4813]: I1125 10:31:50.628098 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 10:31:50 crc kubenswrapper[4813]: I1125 10:31:50.693807 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 10:31:50 crc kubenswrapper[4813]: I1125 10:31:50.693823 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 10:31:50 crc kubenswrapper[4813]: I1125 10:31:50.693933 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 10:31:50 crc kubenswrapper[4813]: I1125 10:31:50.698385 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:31:50 crc kubenswrapper[4813]: I1125 10:31:50.698504 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:31:50 crc kubenswrapper[4813]: I1125 10:31:50.698537 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:31:50 crc kubenswrapper[4813]: I1125 10:31:50.699163 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:31:50 crc kubenswrapper[4813]: I1125 10:31:50.699168 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:31:50 crc kubenswrapper[4813]: I1125 10:31:50.699211 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:31:50 crc kubenswrapper[4813]: I1125 10:31:50.699237 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:31:50 crc kubenswrapper[4813]: I1125 10:31:50.699245 4813 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:31:50 crc kubenswrapper[4813]: I1125 10:31:50.699272 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:31:53 crc kubenswrapper[4813]: I1125 10:31:53.652344 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Nov 25 10:31:53 crc kubenswrapper[4813]: I1125 10:31:53.652638 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 10:31:53 crc kubenswrapper[4813]: I1125 10:31:53.654512 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:31:53 crc kubenswrapper[4813]: I1125 10:31:53.654552 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:31:53 crc kubenswrapper[4813]: I1125 10:31:53.654565 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:31:53 crc kubenswrapper[4813]: E1125 10:31:53.703331 4813 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 25 10:31:55 crc kubenswrapper[4813]: I1125 10:31:55.543582 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 10:31:55 crc kubenswrapper[4813]: I1125 10:31:55.543922 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 10:31:55 crc kubenswrapper[4813]: I1125 10:31:55.546303 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:31:55 crc kubenswrapper[4813]: I1125 10:31:55.546372 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:31:55 crc kubenswrapper[4813]: I1125 10:31:55.546394 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:31:55 crc kubenswrapper[4813]: I1125 10:31:55.549019 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 10:31:55 crc kubenswrapper[4813]: I1125 10:31:55.706574 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 10:31:55 crc kubenswrapper[4813]: I1125 10:31:55.707513 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:31:55 crc kubenswrapper[4813]: I1125 10:31:55.707566 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:31:55 crc kubenswrapper[4813]: I1125 10:31:55.707580 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:31:57 crc kubenswrapper[4813]: W1125 10:31:57.468834 4813 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Nov 25 10:31:57 crc kubenswrapper[4813]: I1125 10:31:57.468952 4813 trace.go:236] Trace[483511619]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 
(25-Nov-2025 10:31:47.467) (total time: 10001ms): Nov 25 10:31:57 crc kubenswrapper[4813]: Trace[483511619]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (10:31:57.468) Nov 25 10:31:57 crc kubenswrapper[4813]: Trace[483511619]: [10.001452326s] [10.001452326s] END Nov 25 10:31:57 crc kubenswrapper[4813]: E1125 10:31:57.468994 4813 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Nov 25 10:31:57 crc kubenswrapper[4813]: W1125 10:31:57.552329 4813 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Nov 25 10:31:57 crc kubenswrapper[4813]: I1125 10:31:57.552468 4813 trace.go:236] Trace[1612572857]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (25-Nov-2025 10:31:47.550) (total time: 10001ms): Nov 25 10:31:57 crc kubenswrapper[4813]: Trace[1612572857]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (10:31:57.552) Nov 25 10:31:57 crc kubenswrapper[4813]: Trace[1612572857]: [10.001546589s] [10.001546589s] END Nov 25 10:31:57 crc kubenswrapper[4813]: E1125 10:31:57.552504 4813 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Nov 25 10:31:57 crc kubenswrapper[4813]: I1125 10:31:57.568901 4813 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Nov 25 10:31:57 crc kubenswrapper[4813]: I1125 10:31:57.770347 4813 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:43552->192.168.126.11:17697: read: connection reset by peer" start-of-body= Nov 25 10:31:57 crc kubenswrapper[4813]: I1125 10:31:57.770426 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:43552->192.168.126.11:17697: read: connection reset by peer" Nov 25 10:31:57 crc kubenswrapper[4813]: I1125 10:31:57.770426 4813 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:43558->192.168.126.11:17697: read: connection reset by peer" start-of-body= Nov 25 10:31:57 crc kubenswrapper[4813]: I1125 10:31:57.770571 4813 
prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:43558->192.168.126.11:17697: read: connection reset by peer" Nov 25 10:31:58 crc kubenswrapper[4813]: I1125 10:31:58.641200 4813 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="Get \"https://192.168.126.11:6443/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 25 10:31:58 crc kubenswrapper[4813]: I1125 10:31:58.641460 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 25 10:31:58 crc kubenswrapper[4813]: I1125 10:31:58.715721 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 25 10:31:58 crc kubenswrapper[4813]: I1125 10:31:58.717429 4813 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e393f04b541e0fc8c686b42396605529aa65fdaaf6602dd7c64a322a5071d643" exitCode=255 Nov 25 10:31:58 crc kubenswrapper[4813]: I1125 10:31:58.717470 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"e393f04b541e0fc8c686b42396605529aa65fdaaf6602dd7c64a322a5071d643"} Nov 25 10:31:58 crc kubenswrapper[4813]: I1125 10:31:58.717642 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 10:31:58 crc kubenswrapper[4813]: I1125 10:31:58.718740 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:31:58 crc kubenswrapper[4813]: I1125 10:31:58.718908 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:31:58 crc kubenswrapper[4813]: I1125 10:31:58.719035 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:31:58 crc kubenswrapper[4813]: I1125 10:31:58.719821 4813 scope.go:117] "RemoveContainer" containerID="e393f04b541e0fc8c686b42396605529aa65fdaaf6602dd7c64a322a5071d643" Nov 25 10:31:58 crc kubenswrapper[4813]: I1125 10:31:58.896875 4813 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Nov 25 10:31:58 crc kubenswrapper[4813]: I1125 10:31:58.897204 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" 
output="HTTP probe failed with statuscode: 403" Nov 25 10:31:59 crc kubenswrapper[4813]: I1125 10:31:59.174639 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Nov 25 10:31:59 crc kubenswrapper[4813]: I1125 10:31:59.174951 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 10:31:59 crc kubenswrapper[4813]: I1125 10:31:59.176481 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:31:59 crc kubenswrapper[4813]: I1125 10:31:59.176539 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:31:59 crc kubenswrapper[4813]: I1125 10:31:59.176553 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:31:59 crc kubenswrapper[4813]: I1125 10:31:59.218928 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Nov 25 10:31:59 crc kubenswrapper[4813]: I1125 10:31:59.353159 4813 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 25 10:31:59 crc kubenswrapper[4813]: I1125 10:31:59.353272 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 25 10:31:59 crc kubenswrapper[4813]: I1125 10:31:59.723602 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 25 10:31:59 crc kubenswrapper[4813]: I1125 10:31:59.726763 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7cbb3888ff07d07784e188a0b7b49e0f5b421cfaeb61924a0a46094fb3795b32"} Nov 25 10:31:59 crc kubenswrapper[4813]: I1125 10:31:59.726891 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 10:31:59 crc kubenswrapper[4813]: I1125 10:31:59.727015 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 10:31:59 crc kubenswrapper[4813]: I1125 10:31:59.728666 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:31:59 crc kubenswrapper[4813]: I1125 10:31:59.728738 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:31:59 crc kubenswrapper[4813]: I1125 10:31:59.728752 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:31:59 crc kubenswrapper[4813]: I1125 10:31:59.728810 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:31:59 crc kubenswrapper[4813]: 
I1125 10:31:59.728856 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:31:59 crc kubenswrapper[4813]: I1125 10:31:59.728881 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:31:59 crc kubenswrapper[4813]: I1125 10:31:59.729513 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 10:31:59 crc kubenswrapper[4813]: I1125 10:31:59.749508 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Nov 25 10:32:00 crc kubenswrapper[4813]: I1125 10:32:00.730259 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 10:32:00 crc kubenswrapper[4813]: I1125 10:32:00.730452 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 10:32:00 crc kubenswrapper[4813]: I1125 10:32:00.731810 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:00 crc kubenswrapper[4813]: I1125 10:32:00.731896 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:00 crc kubenswrapper[4813]: I1125 10:32:00.731920 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:00 crc kubenswrapper[4813]: I1125 10:32:00.732169 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:00 crc kubenswrapper[4813]: I1125 10:32:00.732239 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:00 crc kubenswrapper[4813]: I1125 10:32:00.732253 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:00 crc kubenswrapper[4813]: I1125 10:32:00.992411 4813 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Nov 25 10:32:01 crc kubenswrapper[4813]: I1125 10:32:01.563778 4813 apiserver.go:52] "Watching apiserver" Nov 25 10:32:01 crc kubenswrapper[4813]: I1125 10:32:01.569100 4813 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Nov 25 10:32:01 crc kubenswrapper[4813]: I1125 10:32:01.569370 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb"] Nov 25 10:32:01 crc kubenswrapper[4813]: I1125 10:32:01.569784 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 25 10:32:01 crc kubenswrapper[4813]: I1125 10:32:01.569854 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 10:32:01 crc kubenswrapper[4813]: E1125 10:32:01.569946 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 10:32:01 crc kubenswrapper[4813]: I1125 10:32:01.570041 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 10:32:01 crc kubenswrapper[4813]: I1125 10:32:01.570135 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 10:32:01 crc kubenswrapper[4813]: E1125 10:32:01.570187 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 10:32:01 crc kubenswrapper[4813]: I1125 10:32:01.570225 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 10:32:01 crc kubenswrapper[4813]: I1125 10:32:01.570262 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 10:32:01 crc kubenswrapper[4813]: E1125 10:32:01.570410 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 10:32:01 crc kubenswrapper[4813]: I1125 10:32:01.571668 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Nov 25 10:32:01 crc kubenswrapper[4813]: I1125 10:32:01.571834 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Nov 25 10:32:01 crc kubenswrapper[4813]: I1125 10:32:01.571898 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Nov 25 10:32:01 crc kubenswrapper[4813]: I1125 10:32:01.571986 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Nov 25 10:32:01 crc kubenswrapper[4813]: I1125 10:32:01.572405 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Nov 25 10:32:01 crc kubenswrapper[4813]: I1125 10:32:01.572589 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Nov 25 10:32:01 crc kubenswrapper[4813]: I1125 10:32:01.574124 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Nov 25 10:32:01 crc kubenswrapper[4813]: I1125 10:32:01.575135 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Nov 25 10:32:01 crc kubenswrapper[4813]: I1125 10:32:01.576447 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Nov 25 10:32:01 crc kubenswrapper[4813]: I1125 10:32:01.615560 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 10:32:01 crc kubenswrapper[4813]: I1125 10:32:01.633259 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 10:32:01 crc kubenswrapper[4813]: I1125 10:32:01.651948 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 10:32:01 crc kubenswrapper[4813]: I1125 10:32:01.665044 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 10:32:01 crc kubenswrapper[4813]: I1125 10:32:01.670318 4813 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 25 10:32:01 crc kubenswrapper[4813]: I1125 10:32:01.676822 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 10:32:01 crc kubenswrapper[4813]: I1125 10:32:01.689039 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 10:32:01 crc kubenswrapper[4813]: I1125 10:32:01.699665 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 10:32:03 crc kubenswrapper[4813]: I1125 10:32:03.620632 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 10:32:03 crc kubenswrapper[4813]: I1125 10:32:03.620726 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 10:32:03 crc kubenswrapper[4813]: E1125 10:32:03.620760 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 10:32:03 crc kubenswrapper[4813]: E1125 10:32:03.620866 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 10:32:03 crc kubenswrapper[4813]: I1125 10:32:03.620918 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 10:32:03 crc kubenswrapper[4813]: E1125 10:32:03.620982 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 10:32:03 crc kubenswrapper[4813]: I1125 10:32:03.633025 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 10:32:03 crc kubenswrapper[4813]: I1125 10:32:03.644243 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 10:32:03 crc kubenswrapper[4813]: I1125 10:32:03.644284 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 10:32:03 crc kubenswrapper[4813]: I1125 10:32:03.647649 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 10:32:03 crc kubenswrapper[4813]: I1125 10:32:03.655700 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 10:32:03 crc kubenswrapper[4813]: I1125 10:32:03.657330 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 25 10:32:03 crc kubenswrapper[4813]: I1125 10:32:03.667824 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 10:32:03 crc kubenswrapper[4813]: I1125 10:32:03.690568 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 10:32:03 crc kubenswrapper[4813]: I1125 10:32:03.710435 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 10:32:03 crc kubenswrapper[4813]: I1125 10:32:03.722848 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 10:32:03 crc kubenswrapper[4813]: I1125 10:32:03.731745 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 10:32:03 crc kubenswrapper[4813]: E1125 10:32:03.743608 4813 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-crc\" already exists" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 10:32:03 crc kubenswrapper[4813]: I1125 10:32:03.743769 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 10:32:03 crc kubenswrapper[4813]: I1125 10:32:03.752895 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 10:32:03 crc kubenswrapper[4813]: I1125 10:32:03.762228 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 10:32:03 crc kubenswrapper[4813]: I1125 10:32:03.770539 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 10:32:03 crc kubenswrapper[4813]: I1125 10:32:03.779533 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86379c39-b839-4552-949c-35431188a3a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf4d6feac8fd516ce2d5e2ec13519c2bbd2d152cffe7c434fe2c4b478e8c9a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80f2017cddd8c12997b1818074df5aa37a902dca43c4b60dda58080e1887f8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225dc69c294a0063eda858d71902e848fb59d4595c25bfeecdf8dfb60fdcd6f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cbb3888ff07d07784e188a0b7b49e0f5b421cfaeb61924a0a46094fb3795b32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e393f04b541e0fc8c686b42396605529aa65fdaaf6602dd7c64a322a5071d643\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T10:31:57Z\\\",\\\"message\\\":\\\"W1125 10:31:46.900040 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1125 
10:31:46.900557 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764066706 cert, and key in /tmp/serving-cert-1749499007/serving-signer.crt, /tmp/serving-cert-1749499007/serving-signer.key\\\\nI1125 10:31:47.317086 1 observer_polling.go:159] Starting file observer\\\\nW1125 10:31:47.321027 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 10:31:47.321219 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 10:31:47.325062 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1749499007/tls.crt::/tmp/serving-cert-1749499007/tls.key\\\\\\\"\\\\nF1125 10:31:57.761534 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46e1b456988c700012c86fac792b65d2e7c9a049057d5a17efbf600418191910\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3af1cb4a9a556e116874a9b7f7cb75a99db83b991b65280b6b2a96233ca0a85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3af1cb4a9a556e116874a9b7f7cb75a99db83b991b65280b6b2a96233ca0a85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:31:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 10:32:03 crc kubenswrapper[4813]: I1125 10:32:03.791389 4813 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Nov 25 10:32:03 crc kubenswrapper[4813]: E1125 10:32:03.897213 4813 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Nov 25 10:32:03 crc kubenswrapper[4813]: I1125 10:32:03.898305 4813 trace.go:236] Trace[745448813]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (25-Nov-2025 10:31:50.125) (total time: 13772ms): Nov 25 10:32:03 crc kubenswrapper[4813]: Trace[745448813]: ---"Objects listed" error: 13772ms (10:32:03.898) Nov 25 10:32:03 crc kubenswrapper[4813]: Trace[745448813]: [13.772354858s] [13.772354858s] END Nov 25 10:32:03 crc kubenswrapper[4813]: I1125 10:32:03.898335 4813 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Nov 25 10:32:03 crc kubenswrapper[4813]: I1125 10:32:03.899569 4813 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Nov 25 10:32:03 crc kubenswrapper[4813]: I1125 10:32:03.899816 4813 trace.go:236] Trace[1220557613]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (25-Nov-2025 10:31:51.890) (total time: 12008ms): Nov 25 10:32:03 crc kubenswrapper[4813]: Trace[1220557613]: ---"Objects listed" error: 12008ms (10:32:03.899) Nov 25 10:32:03 crc kubenswrapper[4813]: Trace[1220557613]: [12.008806639s] [12.008806639s] END Nov 25 10:32:03 crc kubenswrapper[4813]: I1125 10:32:03.899839 4813 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Nov 25 10:32:03 crc kubenswrapper[4813]: E1125 10:32:03.901175 4813 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:03.999971 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.000035 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.000065 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.000091 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.000119 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.000145 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.000167 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.000190 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.000215 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.000250 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.000274 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.000295 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.000320 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.000310 4813 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.000339 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.000354 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.000380 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.000395 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.000413 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.000429 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.000445 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.000461 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.000477 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: 
\"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.000492 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.000507 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.000499 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.000514 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.000523 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.000557 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.000602 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.000634 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.000660 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.000712 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.000760 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.000789 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.000813 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.000835 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.000856 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.000881 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod 
\"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.000912 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.000933 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.000954 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.000979 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.001000 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.001020 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.001043 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.001066 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.001089 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.001116 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: 
\"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.001137 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.001158 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.001179 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.001202 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.001225 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.001248 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.001269 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.001294 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.001319 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.001339 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.001359 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.001382 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.001403 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.001424 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.001464 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.001507 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.001541 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.001565 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.001590 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.001612 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" 
(UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.001633 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.001655 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.001677 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.001721 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.001743 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.001766 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.001788 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.001815 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.001836 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.001857 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.001878 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.001911 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.001935 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.001958 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.001981 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.002004 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.002026 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.002048 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.002146 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.002172 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.002195 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.002219 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.002240 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.002263 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.002286 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.002308 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.002330 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.002354 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.002376 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.002399 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.002422 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.002444 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.002465 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.002489 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.002518 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.002555 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.002587 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.002611 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.002638 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.002668 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.002718 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.002742 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.002767 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.002791 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.002813 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.002835 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.002857 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.002878 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.002900 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 
10:32:04.002922 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.002947 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.002970 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.002995 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.003017 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.003040 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.003063 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.003085 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.003177 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.003202 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: 
\"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.003227 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.003252 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.003275 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.003301 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.003325 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.003349 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.003382 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.003405 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.003428 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.003451 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod 
\"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.003474 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.003499 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.003521 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.003545 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.003579 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.003616 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.003661 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.003863 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.003893 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.003918 4813 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.003943 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.003967 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.003993 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.004015 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.004042 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.004067 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.004090 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.004113 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.004137 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 25 10:32:04 crc kubenswrapper[4813]: 
I1125 10:32:04.004160 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.004184 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.004207 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.004231 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.004253 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.004275 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.004300 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.004322 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.004346 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.004369 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.004392 4813 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.004418 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.004440 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.004463 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.004487 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.004510 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.004533 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.004558 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.004582 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.004610 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 10:32:04 
crc kubenswrapper[4813]: I1125 10:32:04.004642 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.004700 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.004757 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.004790 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.004824 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.004852 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.004875 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.004902 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.004931 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.004963 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: 
\"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.004995 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.005027 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.005061 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.005084 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.005109 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.005133 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.005157 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.005181 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.005204 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.005230 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: 
\"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.005254 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.005278 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.005330 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.005362 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.005388 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.005415 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.005440 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.005466 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.005494 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.005518 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.005547 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.005575 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.005605 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.005630 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.005665 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.005726 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.000708 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: 
"0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.005790 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.005809 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.009022 4813 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.009063 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.000710 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.009077 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.000856 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.001100 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.001113 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.001223 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.001200 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.001388 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.001449 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.001466 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.001718 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.001732 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.001785 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.002016 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.002434 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.002458 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.003214 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.003292 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.003496 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.003639 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.003772 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.003791 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.004008 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.004033 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.009300 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.004213 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.004390 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.004503 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.004673 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.004198 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.005697 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.006857 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.007013 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.007077 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.007265 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.007893 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). 
InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.008274 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.008549 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.008847 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.008932 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.009164 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.009396 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.009548 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.009768 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). 
InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.009795 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.010039 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.010068 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.010282 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.010526 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.010801 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.011019 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.011541 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.011645 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.012060 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.012329 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.012299 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.013129 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.013928 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.013949 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.014418 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.014702 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.017236 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: E1125 10:32:04.017712 4813 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.018092 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: E1125 10:32:04.018573 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 10:32:04.518517081 +0000 UTC m=+21.648227037 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.019174 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.019453 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.020165 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.020984 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.020565 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.020956 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.020958 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.021088 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.021336 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.021355 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.021955 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.022192 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.022566 4813 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.022609 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.022635 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.022919 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.022997 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.023312 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.023609 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.023670 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.023674 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.024098 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.023736 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.023758 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.023836 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). 
InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.024575 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.024605 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.024882 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.025297 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.025382 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.025546 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.025578 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.025822 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.025948 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.026274 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.026287 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: E1125 10:32:04.026736 4813 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 10:32:04 crc kubenswrapper[4813]: E1125 10:32:04.026786 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 10:32:04.526773006 +0000 UTC m=+21.656482892 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.026800 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.027044 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.027116 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.027314 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.027322 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.027563 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.027710 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.027731 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.027820 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.028017 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.028034 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.028222 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.028349 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.028357 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.028544 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.028650 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.028740 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.028963 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.029199 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.029464 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.029848 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.030109 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.030348 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.030462 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.020982 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.030760 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.030779 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.030875 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.031014 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.031169 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). 
InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.031325 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.031479 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.031884 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.032202 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.032344 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.032668 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.033016 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.033200 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.033235 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.033328 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.033508 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.033672 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.034036 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.034086 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.034264 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.034262 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: E1125 10:32:04.034411 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:32:04.534388024 +0000 UTC m=+21.664098010 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.034509 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.034657 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.034692 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.034923 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.034939 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.034926 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.035153 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.035174 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.035275 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.035614 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.035629 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.035782 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.036102 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: E1125 10:32:04.037775 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 10:32:04 crc kubenswrapper[4813]: E1125 10:32:04.040096 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 10:32:04 crc kubenswrapper[4813]: E1125 10:32:04.040112 4813 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 10:32:04 crc kubenswrapper[4813]: E1125 10:32:04.040177 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-25 10:32:04.540157612 +0000 UTC m=+21.669867488 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 10:32:04 crc kubenswrapper[4813]: E1125 10:32:04.042834 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 10:32:04 crc kubenswrapper[4813]: E1125 10:32:04.042867 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 10:32:04 crc kubenswrapper[4813]: E1125 10:32:04.042886 4813 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 10:32:04 crc kubenswrapper[4813]: E1125 10:32:04.042958 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-25 10:32:04.542925628 +0000 UTC m=+21.672635524 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.043232 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.043462 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.043970 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.045857 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.045900 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.046037 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.046308 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.046315 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.046422 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.046487 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.047344 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.047365 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.047569 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.052666 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.052801 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.052208 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.053611 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.053721 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.053967 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.054334 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.054423 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.054643 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.054819 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.054907 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.056046 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.056291 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.057184 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.057753 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.057984 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.058160 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.058203 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.058422 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.058982 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.059039 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.060115 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.060821 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.077763 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.078845 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.083966 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.085089 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.109761 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.109799 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.109873 4813 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.109887 4813 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.109897 4813 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.109909 4813 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.109922 4813 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.109932 4813 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.109943 4813 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.109953 4813 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.109963 4813 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.109973 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 
crc kubenswrapper[4813]: I1125 10:32:04.109983 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.109993 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.110004 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.110015 4813 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.110026 4813 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.110037 4813 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.110047 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.110058 4813 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.110070 4813 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.110082 4813 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.110093 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.110103 4813 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.110114 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 
10:32:04.110160 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.110171 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.110182 4813 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.110194 4813 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.110229 4813 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.110241 4813 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.110255 4813 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.110266 4813 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.110278 4813 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.110289 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.110300 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.110311 4813 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.110322 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc 
kubenswrapper[4813]: I1125 10:32:04.110332 4813 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.110343 4813 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.110354 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.110364 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.110375 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.110387 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.110398 4813 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.110410 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.110422 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.110435 4813 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.110446 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.110456 4813 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.110467 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Nov 25 
10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.110479 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.110490 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.110504 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.110518 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.110530 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.110542 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.110554 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.110568 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.110581 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.110592 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.110605 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.110618 4813 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.110631 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.110643 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.110655 4813 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.110673 4813 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.110707 4813 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.110718 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.110731 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.110744 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.110757 4813 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.110770 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.110781 4813 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.110794 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.110807 4813 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.110820 4813 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.110831 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.110844 4813 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.110855 4813 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.110867 4813 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.110880 4813 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.110895 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.110907 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.110920 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.110933 4813 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.110944 4813 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.110954 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.110964 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.110963 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.110975 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.111005 4813 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.111018 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.111022 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.111030 4813 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.111049 4813 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.111064 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.111077 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.111090 4813 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.111102 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.111114 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.111125 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.111137 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.111148 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.111161 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.111173 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.111185 4813 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.111195 4813 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.111206 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.111217 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.111228 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.111240 4813 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.111253 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.111264 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.111274 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.111287 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.111299 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.111310 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.111323 4813 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.111335 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.111346 4813 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.111389 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.111401 4813 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.111412 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.111424 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.111438 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.111450 4813 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.111462 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.111476 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.111487 4813 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.111498 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.111510 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.111521 4813 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.111532 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.111544 4813 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.111555 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.111567 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.111578 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.111590 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.111601 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.111613 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: 
\"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.111625 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.111641 4813 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.111653 4813 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.111665 4813 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.111676 4813 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.111708 4813 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.111719 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.111729 4813 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.111740 4813 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.111753 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.111765 4813 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.111776 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.111787 4813 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.111799 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.111810 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.111821 4813 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.111834 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.111844 4813 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.111855 4813 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.111866 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.111877 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.111887 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.111899 4813 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.111910 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.111920 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.111931 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" 
DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.111942 4813 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.111953 4813 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.111963 4813 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.111973 4813 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.111984 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.111995 4813 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.112006 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.112020 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.112031 4813 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.112043 4813 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.112054 4813 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.112065 4813 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.112076 4813 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: 
I1125 10:32:04.112087 4813 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.112098 4813 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.112109 4813 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.112119 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.112130 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.112141 4813 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.112152 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.112163 4813 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.112175 4813 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.112187 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.112199 4813 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.112211 4813 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.112221 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 
10:32:04.112233 4813 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.285090 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.294414 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.300718 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 10:32:04 crc kubenswrapper[4813]: W1125 10:32:04.319912 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-d6101a9f706312719c7710a4991b6ba4c9abff49a956c851a1d24b0d0deba315 WatchSource:0}: Error finding container d6101a9f706312719c7710a4991b6ba4c9abff49a956c851a1d24b0d0deba315: Status 404 returned error can't find the container with id d6101a9f706312719c7710a4991b6ba4c9abff49a956c851a1d24b0d0deba315 Nov 25 10:32:04 crc kubenswrapper[4813]: W1125 10:32:04.320844 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-13ce2463782745b573c30f65059b1c4647495e6e19b43fb8b8bdd468658c1720 WatchSource:0}: Error finding container 13ce2463782745b573c30f65059b1c4647495e6e19b43fb8b8bdd468658c1720: Status 404 returned error can't find the container with id 13ce2463782745b573c30f65059b1c4647495e6e19b43fb8b8bdd468658c1720 Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.521900 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 10:32:04 crc kubenswrapper[4813]: E1125 10:32:04.522113 4813 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 10:32:04 crc kubenswrapper[4813]: E1125 10:32:04.522215 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 10:32:05.522192782 +0000 UTC m=+22.651902728 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.622829 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.622920 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.622959 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.622982 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 10:32:04 crc kubenswrapper[4813]: E1125 10:32:04.623109 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 10:32:04 crc kubenswrapper[4813]: E1125 10:32:04.623128 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 10:32:04 crc kubenswrapper[4813]: E1125 10:32:04.623141 4813 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 10:32:04 crc kubenswrapper[4813]: E1125 10:32:04.623192 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-25 10:32:05.623175139 +0000 UTC m=+22.752885025 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 10:32:04 crc kubenswrapper[4813]: E1125 10:32:04.623875 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:32:05.623864467 +0000 UTC m=+22.753574353 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:32:04 crc kubenswrapper[4813]: E1125 10:32:04.623918 4813 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 10:32:04 crc kubenswrapper[4813]: E1125 10:32:04.623947 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 10:32:05.623938769 +0000 UTC m=+22.753648655 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 10:32:04 crc kubenswrapper[4813]: E1125 10:32:04.624002 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 10:32:04 crc kubenswrapper[4813]: E1125 10:32:04.624017 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 10:32:04 crc kubenswrapper[4813]: E1125 10:32:04.624028 4813 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 10:32:04 crc kubenswrapper[4813]: E1125 10:32:04.624052 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-25 10:32:05.624044561 +0000 UTC m=+22.753754447 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.741513 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"03303956e8d88df49c9c142a7074fa39272a78ea67e868b302d3a663d7f7178d"} Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.741563 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"e03a74c34d48d3a22a5b0c12d0e6f383b38566f18faf8ff4ce1df4d3acfdc024"} Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.743470 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"00ebb057ca6152197fa76fc78787533ab8ddaa1e1a096c624e3efc5fcf091332"} Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.743510 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"616fae5157b8d51f903f870d19e7ed40447c3eb954b0e1bd0b3323c27deb59f5"} Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.743523 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"d6101a9f706312719c7710a4991b6ba4c9abff49a956c851a1d24b0d0deba315"} Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.744470 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"13ce2463782745b573c30f65059b1c4647495e6e19b43fb8b8bdd468658c1720"} Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.751406 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03303956e8d88df49c9c142a7074fa39272a78ea67e868b302d3a663d7f7178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.760622 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.772897 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.785782 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86379c39-b839-4552-949c-35431188a3a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf4d6feac8fd516ce2d5e2ec13519c2bbd2d152cffe7c434fe2c4b478e8c9a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80f2017cddd8c12997b1818074df5aa37a902dca43c4b60dda58080e1887f8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225dc69c294a0063eda858d71902e848fb59d4595c25bfeecdf8dfb60fdcd6f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cbb3888ff07d07784e188a0b7b49e0f5b421cfaeb61924a0a46094fb3795b32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e393f04b541e0fc8c686b42396605529aa65fdaaf6602dd7c64a322a5071d643\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T10:31:57Z\\\",\\\"message\\\":\\\"W1125 10:31:46.900040 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1125 
10:31:46.900557 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764066706 cert, and key in /tmp/serving-cert-1749499007/serving-signer.crt, /tmp/serving-cert-1749499007/serving-signer.key\\\\nI1125 10:31:47.317086 1 observer_polling.go:159] Starting file observer\\\\nW1125 10:31:47.321027 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 10:31:47.321219 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 10:31:47.325062 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1749499007/tls.crt::/tmp/serving-cert-1749499007/tls.key\\\\\\\"\\\\nF1125 10:31:57.761534 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46e1b456988c700012c86fac792b65d2e7c9a049057d5a17efbf600418191910\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3af1cb4a9a556e116874a9b7f7cb75a99db83b991b65280b6b2a96233ca0a85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3af1cb4a9a556e116874a9b7f7cb75a99db83b991b65280b6b2a96233ca0a85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:31:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.797314 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.811277 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.819580 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.831383 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03303956e8d88df49c9c142a7074fa39272a78ea67e868b302d3a663d7f7178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.841919 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00ebb057ca6152197fa76fc78787533ab8ddaa1e1a096c624e3efc5fcf091332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616fae5157b8d51f903f870d19e7ed40447c3eb954b0e1bd0b3323c27deb59f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.850355 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.860848 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.870637 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.881427 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 10:32:04 crc kubenswrapper[4813]: I1125 10:32:04.894465 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86379c39-b839-4552-949c-35431188a3a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf4d6feac8fd516ce2d5e2ec13519c2bbd2d152cffe7c434fe2c4b478e8c9a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80f2017cddd8c12997b1818074df5aa37a902dca43c4b60dda58080e1887f8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225dc69c294a0063eda858d71902e848fb59d4595c25bfeecdf8dfb60fdcd6f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cbb3888ff07d07784e188a0b7b49e0f5b421cfaeb61924a0a46094fb3795b32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e393f04b541e0fc8c686b42396605529aa65fdaaf6602dd7c64a322a5071d643\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T10:31:57Z\\\",\\\"message\\\":\\\"W1125 10:31:46.900040 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1125 
10:31:46.900557 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764066706 cert, and key in /tmp/serving-cert-1749499007/serving-signer.crt, /tmp/serving-cert-1749499007/serving-signer.key\\\\nI1125 10:31:47.317086 1 observer_polling.go:159] Starting file observer\\\\nW1125 10:31:47.321027 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 10:31:47.321219 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 10:31:47.325062 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1749499007/tls.crt::/tmp/serving-cert-1749499007/tls.key\\\\\\\"\\\\nF1125 10:31:57.761534 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46e1b456988c700012c86fac792b65d2e7c9a049057d5a17efbf600418191910\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3af1cb4a9a556e116874a9b7f7cb75a99db83b991b65280b6b2a96233ca0a85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3af1cb4a9a556e116874a9b7f7cb75a99db83b991b65280b6b2a96233ca0a85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:31:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 10:32:05 crc kubenswrapper[4813]: I1125 10:32:05.529863 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 10:32:05 crc kubenswrapper[4813]: E1125 10:32:05.530025 4813 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 10:32:05 crc kubenswrapper[4813]: E1125 10:32:05.530118 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 10:32:07.530100522 +0000 UTC m=+24.659810408 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 10:32:05 crc kubenswrapper[4813]: I1125 10:32:05.621028 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 10:32:05 crc kubenswrapper[4813]: I1125 10:32:05.621075 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 10:32:05 crc kubenswrapper[4813]: I1125 10:32:05.621103 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 10:32:05 crc kubenswrapper[4813]: E1125 10:32:05.621152 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 10:32:05 crc kubenswrapper[4813]: E1125 10:32:05.621231 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 10:32:05 crc kubenswrapper[4813]: E1125 10:32:05.621311 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 10:32:05 crc kubenswrapper[4813]: I1125 10:32:05.624400 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Nov 25 10:32:05 crc kubenswrapper[4813]: I1125 10:32:05.630650 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:32:05 crc kubenswrapper[4813]: I1125 10:32:05.630746 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 10:32:05 crc kubenswrapper[4813]: I1125 10:32:05.630771 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 10:32:05 crc kubenswrapper[4813]: E1125 10:32:05.630811 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:32:07.63077574 +0000 UTC m=+24.760485636 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:32:05 crc kubenswrapper[4813]: E1125 10:32:05.631146 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 10:32:05 crc kubenswrapper[4813]: E1125 10:32:05.631175 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 10:32:05 crc kubenswrapper[4813]: E1125 10:32:05.631187 4813 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 10:32:05 crc kubenswrapper[4813]: I1125 10:32:05.631234 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 10:32:05 crc kubenswrapper[4813]: E1125 10:32:05.631350 4813 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 10:32:05 crc kubenswrapper[4813]: E1125 10:32:05.633415 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-25 10:32:07.633384326 +0000 UTC m=+24.763094212 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 10:32:05 crc kubenswrapper[4813]: E1125 10:32:05.631348 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 10:32:05 crc kubenswrapper[4813]: E1125 10:32:05.633495 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 10:32:07.633470578 +0000 UTC m=+24.763180464 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 10:32:05 crc kubenswrapper[4813]: E1125 10:32:05.633520 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 10:32:05 crc kubenswrapper[4813]: E1125 10:32:05.633540 4813 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 10:32:05 crc kubenswrapper[4813]: E1125 10:32:05.633613 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-25 10:32:07.633598851 +0000 UTC m=+24.763308737 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 10:32:05 crc kubenswrapper[4813]: I1125 10:32:05.668377 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Nov 25 10:32:05 crc kubenswrapper[4813]: I1125 10:32:05.893511 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Nov 25 10:32:06 crc kubenswrapper[4813]: I1125 10:32:06.143487 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Nov 25 10:32:06 crc kubenswrapper[4813]: I1125 10:32:06.144133 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Nov 25 10:32:06 crc kubenswrapper[4813]: I1125 10:32:06.144642 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Nov 25 10:32:06 crc kubenswrapper[4813]: I1125 10:32:06.145301 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Nov 25 10:32:06 crc kubenswrapper[4813]: I1125 10:32:06.145922 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Nov 25 10:32:06 crc kubenswrapper[4813]: I1125 
10:32:06.146499 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Nov 25 10:32:06 crc kubenswrapper[4813]: I1125 10:32:06.219891 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Nov 25 10:32:06 crc kubenswrapper[4813]: I1125 10:32:06.220645 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Nov 25 10:32:06 crc kubenswrapper[4813]: I1125 10:32:06.221367 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Nov 25 10:32:06 crc kubenswrapper[4813]: I1125 10:32:06.221934 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Nov 25 10:32:06 crc kubenswrapper[4813]: I1125 10:32:06.222478 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Nov 25 10:32:06 crc kubenswrapper[4813]: I1125 10:32:06.223085 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Nov 25 10:32:06 crc kubenswrapper[4813]: I1125 10:32:06.223602 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Nov 25 10:32:06 crc kubenswrapper[4813]: I1125 10:32:06.224237 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Nov 25 10:32:06 crc kubenswrapper[4813]: I1125 10:32:06.226201 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Nov 25 10:32:06 crc kubenswrapper[4813]: I1125 10:32:06.226910 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Nov 25 10:32:06 crc kubenswrapper[4813]: I1125 10:32:06.227604 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Nov 25 10:32:06 crc kubenswrapper[4813]: I1125 10:32:06.228704 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Nov 25 10:32:06 crc kubenswrapper[4813]: I1125 10:32:06.229278 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Nov 25 10:32:06 crc kubenswrapper[4813]: I1125 10:32:06.229851 4813 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Nov 25 10:32:06 crc kubenswrapper[4813]: I1125 10:32:06.232281 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Nov 25 10:32:06 crc kubenswrapper[4813]: I1125 10:32:06.251011 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Nov 25 10:32:06 crc kubenswrapper[4813]: I1125 10:32:06.251804 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Nov 25 10:32:06 crc kubenswrapper[4813]: I1125 10:32:06.830209 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Nov 25 10:32:06 crc kubenswrapper[4813]: I1125 10:32:06.830724 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Nov 25 10:32:06 crc kubenswrapper[4813]: I1125 10:32:06.831343 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Nov 25 10:32:06 crc kubenswrapper[4813]: I1125 10:32:06.832009 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Nov 25 10:32:06 crc kubenswrapper[4813]: I1125 10:32:06.832594 4813 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Nov 25 10:32:06 crc kubenswrapper[4813]: I1125 10:32:06.832751 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.351379 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.352004 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.352601 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.353805 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.354459 4813 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.354972 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.355651 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.356375 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.357607 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.358623 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.359571 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.360368 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.361011 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.361544 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.362094 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.362846 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.363339 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.363923 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.364450 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.365484 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.366142 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.366740 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Nov 25 10:32:07 crc kubenswrapper[4813]: E1125 10:32:07.367290 4813 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.746s" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.367322 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-knhz8"] Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.367580 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-mmh87"] Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.367745 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.367882 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-mmh87" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.367882 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.367908 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-4s9w7"] Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.367992 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.367992 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.368861 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.368912 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-rlpbx"] Nov 25 10:32:07 crc kubenswrapper[4813]: E1125 10:32:07.368990 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.369096 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-4s9w7" Nov 25 10:32:07 crc kubenswrapper[4813]: E1125 10:32:07.369310 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 10:32:07 crc kubenswrapper[4813]: E1125 10:32:07.369513 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.369533 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8s5k7"] Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.371328 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.369593 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-rlpbx" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.373757 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.374027 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.374308 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.374432 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.374497 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.374317 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.374785 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.374500 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.374868 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.374941 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.375121 4813 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-multus"/"default-cni-sysctl-allowlist" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.375217 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.375338 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.375393 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.375408 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.375496 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.377233 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.377335 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.377475 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.377524 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.377367 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.380584 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.381076 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.382201 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.389206 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:07Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.400113 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:07Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.413571 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:07Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.427607 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86379c39-b839-4552-949c-35431188a3a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf4d6feac8fd516ce2d5e2ec13519c2bbd2d152cffe7c434fe2c4b478e8c9a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80f2017cddd8c12997b1818074df5aa37a902dca43c4b60dda58080e1887f8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225dc69c294a0063eda858d71902e848fb59d4595c25bfeecdf8dfb60fdcd6f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cbb3888ff07d07784e188a0b7b49e0f5b421cfaeb61924a0a46094fb3795b32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e393f04b541e0fc8c686b42396605529aa65fdaaf6602dd7c64a322a5071d643\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T10:31:57Z\\\",\\\"message\\\":\\\"W1125 10:31:46.900040 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1125 
10:31:46.900557 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764066706 cert, and key in /tmp/serving-cert-1749499007/serving-signer.crt, /tmp/serving-cert-1749499007/serving-signer.key\\\\nI1125 10:31:47.317086 1 observer_polling.go:159] Starting file observer\\\\nW1125 10:31:47.321027 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 10:31:47.321219 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 10:31:47.325062 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1749499007/tls.crt::/tmp/serving-cert-1749499007/tls.key\\\\\\\"\\\\nF1125 10:31:57.761534 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46e1b456988c700012c86fac792b65d2e7c9a049057d5a17efbf600418191910\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3af1cb4a9a556e116874a9b7f7cb75a99db83b991b65280b6b2a96233ca0a85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3af1cb4a9a556e116874a9b7f7cb75a99db83b991b65280b6b2a96233ca0a85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:31:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:07Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.441146 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03303956e8d88df49c9c142a7074fa39272a78ea67e868b302d3a663d7f7178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:07Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.449974 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/98439068-3c89-4c1b-8bb8-8aa848ef0cd3-host-var-lib-cni-bin\") pod \"multus-rlpbx\" (UID: \"98439068-3c89-4c1b-8bb8-8aa848ef0cd3\") " pod="openshift-multus/multus-rlpbx" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.450027 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8460ec76-ba89-4f8f-9055-d7274ab52d11-host-run-netns\") pod \"ovnkube-node-8s5k7\" (UID: \"8460ec76-ba89-4f8f-9055-d7274ab52d11\") " pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 
10:32:07.450053 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/98439068-3c89-4c1b-8bb8-8aa848ef0cd3-system-cni-dir\") pod \"multus-rlpbx\" (UID: \"98439068-3c89-4c1b-8bb8-8aa848ef0cd3\") " pod="openshift-multus/multus-rlpbx" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.450078 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/98439068-3c89-4c1b-8bb8-8aa848ef0cd3-os-release\") pod \"multus-rlpbx\" (UID: \"98439068-3c89-4c1b-8bb8-8aa848ef0cd3\") " pod="openshift-multus/multus-rlpbx" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.450101 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a2ac9045-f02f-4149-afa5-61da1452d547-cni-binary-copy\") pod \"multus-additional-cni-plugins-4s9w7\" (UID: \"a2ac9045-f02f-4149-afa5-61da1452d547\") " pod="openshift-multus/multus-additional-cni-plugins-4s9w7" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.450122 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/98439068-3c89-4c1b-8bb8-8aa848ef0cd3-multus-cni-dir\") pod \"multus-rlpbx\" (UID: \"98439068-3c89-4c1b-8bb8-8aa848ef0cd3\") " pod="openshift-multus/multus-rlpbx" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.450140 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/98439068-3c89-4c1b-8bb8-8aa848ef0cd3-multus-conf-dir\") pod \"multus-rlpbx\" (UID: \"98439068-3c89-4c1b-8bb8-8aa848ef0cd3\") " pod="openshift-multus/multus-rlpbx" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.450159 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8ece7e9c-d49a-4348-98ec-bd6ab589f750-mcd-auth-proxy-config\") pod \"machine-config-daemon-knhz8\" (UID: \"8ece7e9c-d49a-4348-98ec-bd6ab589f750\") " pod="openshift-machine-config-operator/machine-config-daemon-knhz8" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.450176 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8460ec76-ba89-4f8f-9055-d7274ab52d11-node-log\") pod \"ovnkube-node-8s5k7\" (UID: \"8460ec76-ba89-4f8f-9055-d7274ab52d11\") " pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.450191 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/98439068-3c89-4c1b-8bb8-8aa848ef0cd3-multus-socket-dir-parent\") pod \"multus-rlpbx\" (UID: \"98439068-3c89-4c1b-8bb8-8aa848ef0cd3\") " pod="openshift-multus/multus-rlpbx" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.450207 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/98439068-3c89-4c1b-8bb8-8aa848ef0cd3-host-run-k8s-cni-cncf-io\") pod \"multus-rlpbx\" (UID: \"98439068-3c89-4c1b-8bb8-8aa848ef0cd3\") " pod="openshift-multus/multus-rlpbx" Nov 
25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.450222 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8460ec76-ba89-4f8f-9055-d7274ab52d11-ovnkube-script-lib\") pod \"ovnkube-node-8s5k7\" (UID: \"8460ec76-ba89-4f8f-9055-d7274ab52d11\") " pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.450236 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/98439068-3c89-4c1b-8bb8-8aa848ef0cd3-host-var-lib-cni-multus\") pod \"multus-rlpbx\" (UID: \"98439068-3c89-4c1b-8bb8-8aa848ef0cd3\") " pod="openshift-multus/multus-rlpbx" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.450253 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtc7m\" (UniqueName: \"kubernetes.io/projected/7bcb41f8-67f5-4a87-8b49-07da054e0c81-kube-api-access-xtc7m\") pod \"node-resolver-mmh87\" (UID: \"7bcb41f8-67f5-4a87-8b49-07da054e0c81\") " pod="openshift-dns/node-resolver-mmh87" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.450266 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8460ec76-ba89-4f8f-9055-d7274ab52d11-etc-openvswitch\") pod \"ovnkube-node-8s5k7\" (UID: \"8460ec76-ba89-4f8f-9055-d7274ab52d11\") " pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.450280 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/98439068-3c89-4c1b-8bb8-8aa848ef0cd3-host-run-multus-certs\") pod \"multus-rlpbx\" (UID: \"98439068-3c89-4c1b-8bb8-8aa848ef0cd3\") " pod="openshift-multus/multus-rlpbx" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.450308 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j55j7\" (UniqueName: \"kubernetes.io/projected/8ece7e9c-d49a-4348-98ec-bd6ab589f750-kube-api-access-j55j7\") pod \"machine-config-daemon-knhz8\" (UID: \"8ece7e9c-d49a-4348-98ec-bd6ab589f750\") " pod="openshift-machine-config-operator/machine-config-daemon-knhz8" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.450321 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8460ec76-ba89-4f8f-9055-d7274ab52d11-log-socket\") pod \"ovnkube-node-8s5k7\" (UID: \"8460ec76-ba89-4f8f-9055-d7274ab52d11\") " pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.450335 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8460ec76-ba89-4f8f-9055-d7274ab52d11-ovnkube-config\") pod \"ovnkube-node-8s5k7\" (UID: \"8460ec76-ba89-4f8f-9055-d7274ab52d11\") " pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.450349 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a2ac9045-f02f-4149-afa5-61da1452d547-tuning-conf-dir\") pod 
\"multus-additional-cni-plugins-4s9w7\" (UID: \"a2ac9045-f02f-4149-afa5-61da1452d547\") " pod="openshift-multus/multus-additional-cni-plugins-4s9w7" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.450378 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/a2ac9045-f02f-4149-afa5-61da1452d547-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-4s9w7\" (UID: \"a2ac9045-f02f-4149-afa5-61da1452d547\") " pod="openshift-multus/multus-additional-cni-plugins-4s9w7" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.450395 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8ece7e9c-d49a-4348-98ec-bd6ab589f750-proxy-tls\") pod \"machine-config-daemon-knhz8\" (UID: \"8ece7e9c-d49a-4348-98ec-bd6ab589f750\") " pod="openshift-machine-config-operator/machine-config-daemon-knhz8" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.450411 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8460ec76-ba89-4f8f-9055-d7274ab52d11-run-ovn\") pod \"ovnkube-node-8s5k7\" (UID: \"8460ec76-ba89-4f8f-9055-d7274ab52d11\") " pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.450425 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8460ec76-ba89-4f8f-9055-d7274ab52d11-env-overrides\") pod \"ovnkube-node-8s5k7\" (UID: \"8460ec76-ba89-4f8f-9055-d7274ab52d11\") " pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.450440 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a2ac9045-f02f-4149-afa5-61da1452d547-os-release\") pod \"multus-additional-cni-plugins-4s9w7\" (UID: \"a2ac9045-f02f-4149-afa5-61da1452d547\") " pod="openshift-multus/multus-additional-cni-plugins-4s9w7" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.450461 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8460ec76-ba89-4f8f-9055-d7274ab52d11-host-run-ovn-kubernetes\") pod \"ovnkube-node-8s5k7\" (UID: \"8460ec76-ba89-4f8f-9055-d7274ab52d11\") " pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.450474 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdxm5\" (UniqueName: \"kubernetes.io/projected/98439068-3c89-4c1b-8bb8-8aa848ef0cd3-kube-api-access-mdxm5\") pod \"multus-rlpbx\" (UID: \"98439068-3c89-4c1b-8bb8-8aa848ef0cd3\") " pod="openshift-multus/multus-rlpbx" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.450502 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8460ec76-ba89-4f8f-9055-d7274ab52d11-var-lib-openvswitch\") pod \"ovnkube-node-8s5k7\" (UID: \"8460ec76-ba89-4f8f-9055-d7274ab52d11\") " pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.450516 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/98439068-3c89-4c1b-8bb8-8aa848ef0cd3-etc-kubernetes\") pod \"multus-rlpbx\" (UID: \"98439068-3c89-4c1b-8bb8-8aa848ef0cd3\") " pod="openshift-multus/multus-rlpbx" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.450532 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8460ec76-ba89-4f8f-9055-d7274ab52d11-host-slash\") pod \"ovnkube-node-8s5k7\" (UID: \"8460ec76-ba89-4f8f-9055-d7274ab52d11\") " pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.450546 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8460ec76-ba89-4f8f-9055-d7274ab52d11-host-cni-bin\") pod \"ovnkube-node-8s5k7\" (UID: \"8460ec76-ba89-4f8f-9055-d7274ab52d11\") " pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.450558 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/98439068-3c89-4c1b-8bb8-8aa848ef0cd3-cni-binary-copy\") pod \"multus-rlpbx\" (UID: \"98439068-3c89-4c1b-8bb8-8aa848ef0cd3\") " pod="openshift-multus/multus-rlpbx" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.450572 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/98439068-3c89-4c1b-8bb8-8aa848ef0cd3-host-var-lib-kubelet\") pod \"multus-rlpbx\" (UID: \"98439068-3c89-4c1b-8bb8-8aa848ef0cd3\") " pod="openshift-multus/multus-rlpbx" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.450587 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8460ec76-ba89-4f8f-9055-d7274ab52d11-run-systemd\") pod \"ovnkube-node-8s5k7\" (UID: \"8460ec76-ba89-4f8f-9055-d7274ab52d11\") " pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.450601 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8460ec76-ba89-4f8f-9055-d7274ab52d11-run-openvswitch\") pod \"ovnkube-node-8s5k7\" (UID: \"8460ec76-ba89-4f8f-9055-d7274ab52d11\") " pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.450621 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a2ac9045-f02f-4149-afa5-61da1452d547-system-cni-dir\") pod \"multus-additional-cni-plugins-4s9w7\" (UID: \"a2ac9045-f02f-4149-afa5-61da1452d547\") " pod="openshift-multus/multus-additional-cni-plugins-4s9w7" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.450634 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/7bcb41f8-67f5-4a87-8b49-07da054e0c81-hosts-file\") pod \"node-resolver-mmh87\" (UID: \"7bcb41f8-67f5-4a87-8b49-07da054e0c81\") " pod="openshift-dns/node-resolver-mmh87" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.450650 4813 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8460ec76-ba89-4f8f-9055-d7274ab52d11-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8s5k7\" (UID: \"8460ec76-ba89-4f8f-9055-d7274ab52d11\") " pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.450665 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8460ec76-ba89-4f8f-9055-d7274ab52d11-host-kubelet\") pod \"ovnkube-node-8s5k7\" (UID: \"8460ec76-ba89-4f8f-9055-d7274ab52d11\") " pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.450698 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/98439068-3c89-4c1b-8bb8-8aa848ef0cd3-cnibin\") pod \"multus-rlpbx\" (UID: \"98439068-3c89-4c1b-8bb8-8aa848ef0cd3\") " pod="openshift-multus/multus-rlpbx" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.450712 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/98439068-3c89-4c1b-8bb8-8aa848ef0cd3-multus-daemon-config\") pod \"multus-rlpbx\" (UID: \"98439068-3c89-4c1b-8bb8-8aa848ef0cd3\") " pod="openshift-multus/multus-rlpbx" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.450725 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svkcf\" (UniqueName: \"kubernetes.io/projected/8460ec76-ba89-4f8f-9055-d7274ab52d11-kube-api-access-svkcf\") pod \"ovnkube-node-8s5k7\" (UID: \"8460ec76-ba89-4f8f-9055-d7274ab52d11\") " pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.450739 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/8ece7e9c-d49a-4348-98ec-bd6ab589f750-rootfs\") pod \"machine-config-daemon-knhz8\" (UID: \"8ece7e9c-d49a-4348-98ec-bd6ab589f750\") " pod="openshift-machine-config-operator/machine-config-daemon-knhz8" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.450752 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8460ec76-ba89-4f8f-9055-d7274ab52d11-systemd-units\") pod \"ovnkube-node-8s5k7\" (UID: \"8460ec76-ba89-4f8f-9055-d7274ab52d11\") " pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.450766 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8460ec76-ba89-4f8f-9055-d7274ab52d11-ovn-node-metrics-cert\") pod \"ovnkube-node-8s5k7\" (UID: \"8460ec76-ba89-4f8f-9055-d7274ab52d11\") " pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.450780 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/98439068-3c89-4c1b-8bb8-8aa848ef0cd3-host-run-netns\") pod \"multus-rlpbx\" (UID: \"98439068-3c89-4c1b-8bb8-8aa848ef0cd3\") " 
pod="openshift-multus/multus-rlpbx" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.450794 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/98439068-3c89-4c1b-8bb8-8aa848ef0cd3-hostroot\") pod \"multus-rlpbx\" (UID: \"98439068-3c89-4c1b-8bb8-8aa848ef0cd3\") " pod="openshift-multus/multus-rlpbx" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.450811 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a2ac9045-f02f-4149-afa5-61da1452d547-cnibin\") pod \"multus-additional-cni-plugins-4s9w7\" (UID: \"a2ac9045-f02f-4149-afa5-61da1452d547\") " pod="openshift-multus/multus-additional-cni-plugins-4s9w7" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.450825 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgwgd\" (UniqueName: \"kubernetes.io/projected/a2ac9045-f02f-4149-afa5-61da1452d547-kube-api-access-fgwgd\") pod \"multus-additional-cni-plugins-4s9w7\" (UID: \"a2ac9045-f02f-4149-afa5-61da1452d547\") " pod="openshift-multus/multus-additional-cni-plugins-4s9w7" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.450839 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8460ec76-ba89-4f8f-9055-d7274ab52d11-host-cni-netd\") pod \"ovnkube-node-8s5k7\" (UID: \"8460ec76-ba89-4f8f-9055-d7274ab52d11\") " pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.461315 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00ebb057ca6152197fa76fc78787533ab8ddaa1e1a096c624e3efc5fcf091332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616fae5157b8d51f903f870d19e7ed40447c3eb954b0e1bd0b3323c27deb59f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:07Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.473456 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:07Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.485433 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86379c39-b839-4552-949c-35431188a3a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf4d6feac8fd516ce2d5e2ec13519c2bbd2d152cffe7c434fe2c4b478e8c9a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80f2017cddd8c12997b1818074df5aa37a902dca43c4b60dda58080e1887f8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225dc69c294a0063eda858d71902e848fb59d4595c25bfeecdf8dfb60fdcd6f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cbb3888ff07d07784e188a0b7b49e0f5b421cfaeb61924a0a46094fb3795b32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e393f04b541e0fc8c686b42396605529aa65fdaaf6602dd7c64a322a5071d643\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T10:31:57Z\\\",\\\"message\\\":\\\"W1125 10:31:46.900040 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1125 
10:31:46.900557 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764066706 cert, and key in /tmp/serving-cert-1749499007/serving-signer.crt, /tmp/serving-cert-1749499007/serving-signer.key\\\\nI1125 10:31:47.317086 1 observer_polling.go:159] Starting file observer\\\\nW1125 10:31:47.321027 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 10:31:47.321219 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 10:31:47.325062 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1749499007/tls.crt::/tmp/serving-cert-1749499007/tls.key\\\\\\\"\\\\nF1125 10:31:57.761534 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46e1b456988c700012c86fac792b65d2e7c9a049057d5a17efbf600418191910\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3af1cb4a9a556e116874a9b7f7cb75a99db83b991b65280b6b2a96233ca0a85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3af1cb4a9a556e116874a9b7f7cb75a99db83b991b65280b6b2a96233ca0a85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:31:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:07Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.495610 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mmh87" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bcb41f8-67f5-4a87-8b49-07da054e0c81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xtc7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mmh87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:07Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.506800 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ece7e9c-d49a-4348-98ec-bd6ab589f750\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j55j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j55j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-knhz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:07Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.517434 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00ebb057ca6152197fa76fc78787533ab8ddaa1e1a096c624e3efc5fcf091332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616fae5157b8d51f903f870d19e7ed40447c3eb954b0e1bd0b3323c27deb59f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:07Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.527357 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:07Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.538614 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rlpbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98439068-3c89-4c1b-8bb8-8aa848ef0cd3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdxm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rlpbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:07Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.548211 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:07Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.551323 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8460ec76-ba89-4f8f-9055-d7274ab52d11-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8s5k7\" (UID: \"8460ec76-ba89-4f8f-9055-d7274ab52d11\") " pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.551368 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/98439068-3c89-4c1b-8bb8-8aa848ef0cd3-cnibin\") pod \"multus-rlpbx\" (UID: \"98439068-3c89-4c1b-8bb8-8aa848ef0cd3\") " pod="openshift-multus/multus-rlpbx" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.551385 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/98439068-3c89-4c1b-8bb8-8aa848ef0cd3-multus-daemon-config\") pod \"multus-rlpbx\" (UID: \"98439068-3c89-4c1b-8bb8-8aa848ef0cd3\") " pod="openshift-multus/multus-rlpbx" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.551406 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8460ec76-ba89-4f8f-9055-d7274ab52d11-host-kubelet\") pod \"ovnkube-node-8s5k7\" (UID: \"8460ec76-ba89-4f8f-9055-d7274ab52d11\") " pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.551423 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: 
\"kubernetes.io/host-path/8ece7e9c-d49a-4348-98ec-bd6ab589f750-rootfs\") pod \"machine-config-daemon-knhz8\" (UID: \"8ece7e9c-d49a-4348-98ec-bd6ab589f750\") " pod="openshift-machine-config-operator/machine-config-daemon-knhz8" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.551437 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8460ec76-ba89-4f8f-9055-d7274ab52d11-systemd-units\") pod \"ovnkube-node-8s5k7\" (UID: \"8460ec76-ba89-4f8f-9055-d7274ab52d11\") " pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.551450 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svkcf\" (UniqueName: \"kubernetes.io/projected/8460ec76-ba89-4f8f-9055-d7274ab52d11-kube-api-access-svkcf\") pod \"ovnkube-node-8s5k7\" (UID: \"8460ec76-ba89-4f8f-9055-d7274ab52d11\") " pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.551442 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8460ec76-ba89-4f8f-9055-d7274ab52d11-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8s5k7\" (UID: \"8460ec76-ba89-4f8f-9055-d7274ab52d11\") " pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.551498 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a2ac9045-f02f-4149-afa5-61da1452d547-cnibin\") pod \"multus-additional-cni-plugins-4s9w7\" (UID: \"a2ac9045-f02f-4149-afa5-61da1452d547\") " pod="openshift-multus/multus-additional-cni-plugins-4s9w7" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.551466 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a2ac9045-f02f-4149-afa5-61da1452d547-cnibin\") pod \"multus-additional-cni-plugins-4s9w7\" (UID: \"a2ac9045-f02f-4149-afa5-61da1452d547\") " pod="openshift-multus/multus-additional-cni-plugins-4s9w7" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.551533 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgwgd\" (UniqueName: \"kubernetes.io/projected/a2ac9045-f02f-4149-afa5-61da1452d547-kube-api-access-fgwgd\") pod \"multus-additional-cni-plugins-4s9w7\" (UID: \"a2ac9045-f02f-4149-afa5-61da1452d547\") " pod="openshift-multus/multus-additional-cni-plugins-4s9w7" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.551551 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8460ec76-ba89-4f8f-9055-d7274ab52d11-host-cni-netd\") pod \"ovnkube-node-8s5k7\" (UID: \"8460ec76-ba89-4f8f-9055-d7274ab52d11\") " pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.551568 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8460ec76-ba89-4f8f-9055-d7274ab52d11-ovn-node-metrics-cert\") pod \"ovnkube-node-8s5k7\" (UID: \"8460ec76-ba89-4f8f-9055-d7274ab52d11\") " pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.551583 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/98439068-3c89-4c1b-8bb8-8aa848ef0cd3-host-run-netns\") pod \"multus-rlpbx\" (UID: \"98439068-3c89-4c1b-8bb8-8aa848ef0cd3\") " pod="openshift-multus/multus-rlpbx" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.551587 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/98439068-3c89-4c1b-8bb8-8aa848ef0cd3-cnibin\") pod \"multus-rlpbx\" (UID: \"98439068-3c89-4c1b-8bb8-8aa848ef0cd3\") " pod="openshift-multus/multus-rlpbx" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.551596 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/98439068-3c89-4c1b-8bb8-8aa848ef0cd3-hostroot\") pod \"multus-rlpbx\" (UID: \"98439068-3c89-4c1b-8bb8-8aa848ef0cd3\") " pod="openshift-multus/multus-rlpbx" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.551614 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8460ec76-ba89-4f8f-9055-d7274ab52d11-host-run-netns\") pod \"ovnkube-node-8s5k7\" (UID: \"8460ec76-ba89-4f8f-9055-d7274ab52d11\") " pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.551629 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/98439068-3c89-4c1b-8bb8-8aa848ef0cd3-system-cni-dir\") pod \"multus-rlpbx\" (UID: \"98439068-3c89-4c1b-8bb8-8aa848ef0cd3\") " pod="openshift-multus/multus-rlpbx" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.551643 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/98439068-3c89-4c1b-8bb8-8aa848ef0cd3-os-release\") pod \"multus-rlpbx\" (UID: \"98439068-3c89-4c1b-8bb8-8aa848ef0cd3\") " pod="openshift-multus/multus-rlpbx" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.551658 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/98439068-3c89-4c1b-8bb8-8aa848ef0cd3-host-var-lib-cni-bin\") pod \"multus-rlpbx\" (UID: \"98439068-3c89-4c1b-8bb8-8aa848ef0cd3\") " pod="openshift-multus/multus-rlpbx" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.551672 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/98439068-3c89-4c1b-8bb8-8aa848ef0cd3-multus-conf-dir\") pod \"multus-rlpbx\" (UID: \"98439068-3c89-4c1b-8bb8-8aa848ef0cd3\") " pod="openshift-multus/multus-rlpbx" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.551704 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a2ac9045-f02f-4149-afa5-61da1452d547-cni-binary-copy\") pod \"multus-additional-cni-plugins-4s9w7\" (UID: \"a2ac9045-f02f-4149-afa5-61da1452d547\") " pod="openshift-multus/multus-additional-cni-plugins-4s9w7" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.551719 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/98439068-3c89-4c1b-8bb8-8aa848ef0cd3-multus-cni-dir\") pod \"multus-rlpbx\" (UID: \"98439068-3c89-4c1b-8bb8-8aa848ef0cd3\") " pod="openshift-multus/multus-rlpbx" Nov 25 
10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.551733 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8460ec76-ba89-4f8f-9055-d7274ab52d11-node-log\") pod \"ovnkube-node-8s5k7\" (UID: \"8460ec76-ba89-4f8f-9055-d7274ab52d11\") " pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.551749 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/98439068-3c89-4c1b-8bb8-8aa848ef0cd3-multus-socket-dir-parent\") pod \"multus-rlpbx\" (UID: \"98439068-3c89-4c1b-8bb8-8aa848ef0cd3\") " pod="openshift-multus/multus-rlpbx" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.551765 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/98439068-3c89-4c1b-8bb8-8aa848ef0cd3-host-run-k8s-cni-cncf-io\") pod \"multus-rlpbx\" (UID: \"98439068-3c89-4c1b-8bb8-8aa848ef0cd3\") " pod="openshift-multus/multus-rlpbx" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.551781 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8ece7e9c-d49a-4348-98ec-bd6ab589f750-mcd-auth-proxy-config\") pod \"machine-config-daemon-knhz8\" (UID: \"8ece7e9c-d49a-4348-98ec-bd6ab589f750\") " pod="openshift-machine-config-operator/machine-config-daemon-knhz8" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.551799 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtc7m\" (UniqueName: \"kubernetes.io/projected/7bcb41f8-67f5-4a87-8b49-07da054e0c81-kube-api-access-xtc7m\") pod \"node-resolver-mmh87\" (UID: \"7bcb41f8-67f5-4a87-8b49-07da054e0c81\") " pod="openshift-dns/node-resolver-mmh87" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.551817 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8460ec76-ba89-4f8f-9055-d7274ab52d11-etc-openvswitch\") pod \"ovnkube-node-8s5k7\" (UID: \"8460ec76-ba89-4f8f-9055-d7274ab52d11\") " pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.551833 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8460ec76-ba89-4f8f-9055-d7274ab52d11-ovnkube-script-lib\") pod \"ovnkube-node-8s5k7\" (UID: \"8460ec76-ba89-4f8f-9055-d7274ab52d11\") " pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.551849 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/98439068-3c89-4c1b-8bb8-8aa848ef0cd3-host-var-lib-cni-multus\") pod \"multus-rlpbx\" (UID: \"98439068-3c89-4c1b-8bb8-8aa848ef0cd3\") " pod="openshift-multus/multus-rlpbx" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.551866 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j55j7\" (UniqueName: \"kubernetes.io/projected/8ece7e9c-d49a-4348-98ec-bd6ab589f750-kube-api-access-j55j7\") pod \"machine-config-daemon-knhz8\" (UID: \"8ece7e9c-d49a-4348-98ec-bd6ab589f750\") " pod="openshift-machine-config-operator/machine-config-daemon-knhz8" Nov 25 10:32:07 crc 
kubenswrapper[4813]: I1125 10:32:07.551882 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8460ec76-ba89-4f8f-9055-d7274ab52d11-log-socket\") pod \"ovnkube-node-8s5k7\" (UID: \"8460ec76-ba89-4f8f-9055-d7274ab52d11\") " pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.551897 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/98439068-3c89-4c1b-8bb8-8aa848ef0cd3-host-run-multus-certs\") pod \"multus-rlpbx\" (UID: \"98439068-3c89-4c1b-8bb8-8aa848ef0cd3\") " pod="openshift-multus/multus-rlpbx" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.551921 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a2ac9045-f02f-4149-afa5-61da1452d547-tuning-conf-dir\") pod \"multus-additional-cni-plugins-4s9w7\" (UID: \"a2ac9045-f02f-4149-afa5-61da1452d547\") " pod="openshift-multus/multus-additional-cni-plugins-4s9w7" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.551936 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/a2ac9045-f02f-4149-afa5-61da1452d547-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-4s9w7\" (UID: \"a2ac9045-f02f-4149-afa5-61da1452d547\") " pod="openshift-multus/multus-additional-cni-plugins-4s9w7" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.551951 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8ece7e9c-d49a-4348-98ec-bd6ab589f750-proxy-tls\") pod \"machine-config-daemon-knhz8\" (UID: \"8ece7e9c-d49a-4348-98ec-bd6ab589f750\") " pod="openshift-machine-config-operator/machine-config-daemon-knhz8" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.551978 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8460ec76-ba89-4f8f-9055-d7274ab52d11-ovnkube-config\") pod \"ovnkube-node-8s5k7\" (UID: \"8460ec76-ba89-4f8f-9055-d7274ab52d11\") " pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.551996 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8460ec76-ba89-4f8f-9055-d7274ab52d11-run-ovn\") pod \"ovnkube-node-8s5k7\" (UID: \"8460ec76-ba89-4f8f-9055-d7274ab52d11\") " pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.552011 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8460ec76-ba89-4f8f-9055-d7274ab52d11-env-overrides\") pod \"ovnkube-node-8s5k7\" (UID: \"8460ec76-ba89-4f8f-9055-d7274ab52d11\") " pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.552026 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a2ac9045-f02f-4149-afa5-61da1452d547-os-release\") pod \"multus-additional-cni-plugins-4s9w7\" (UID: \"a2ac9045-f02f-4149-afa5-61da1452d547\") " pod="openshift-multus/multus-additional-cni-plugins-4s9w7" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 
10:32:07.552042 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8460ec76-ba89-4f8f-9055-d7274ab52d11-host-run-ovn-kubernetes\") pod \"ovnkube-node-8s5k7\" (UID: \"8460ec76-ba89-4f8f-9055-d7274ab52d11\") " pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.552057 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdxm5\" (UniqueName: \"kubernetes.io/projected/98439068-3c89-4c1b-8bb8-8aa848ef0cd3-kube-api-access-mdxm5\") pod \"multus-rlpbx\" (UID: \"98439068-3c89-4c1b-8bb8-8aa848ef0cd3\") " pod="openshift-multus/multus-rlpbx" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.552073 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8460ec76-ba89-4f8f-9055-d7274ab52d11-var-lib-openvswitch\") pod \"ovnkube-node-8s5k7\" (UID: \"8460ec76-ba89-4f8f-9055-d7274ab52d11\") " pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.552088 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/98439068-3c89-4c1b-8bb8-8aa848ef0cd3-etc-kubernetes\") pod \"multus-rlpbx\" (UID: \"98439068-3c89-4c1b-8bb8-8aa848ef0cd3\") " pod="openshift-multus/multus-rlpbx" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.552104 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.552127 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8460ec76-ba89-4f8f-9055-d7274ab52d11-host-cni-bin\") pod \"ovnkube-node-8s5k7\" (UID: \"8460ec76-ba89-4f8f-9055-d7274ab52d11\") " pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.552142 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/98439068-3c89-4c1b-8bb8-8aa848ef0cd3-cni-binary-copy\") pod \"multus-rlpbx\" (UID: \"98439068-3c89-4c1b-8bb8-8aa848ef0cd3\") " pod="openshift-multus/multus-rlpbx" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.552159 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/98439068-3c89-4c1b-8bb8-8aa848ef0cd3-host-var-lib-kubelet\") pod \"multus-rlpbx\" (UID: \"98439068-3c89-4c1b-8bb8-8aa848ef0cd3\") " pod="openshift-multus/multus-rlpbx" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.552174 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8460ec76-ba89-4f8f-9055-d7274ab52d11-host-slash\") pod \"ovnkube-node-8s5k7\" (UID: \"8460ec76-ba89-4f8f-9055-d7274ab52d11\") " pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.552189 4813 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a2ac9045-f02f-4149-afa5-61da1452d547-system-cni-dir\") pod \"multus-additional-cni-plugins-4s9w7\" (UID: \"a2ac9045-f02f-4149-afa5-61da1452d547\") " pod="openshift-multus/multus-additional-cni-plugins-4s9w7" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.552204 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/7bcb41f8-67f5-4a87-8b49-07da054e0c81-hosts-file\") pod \"node-resolver-mmh87\" (UID: \"7bcb41f8-67f5-4a87-8b49-07da054e0c81\") " pod="openshift-dns/node-resolver-mmh87" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.552219 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8460ec76-ba89-4f8f-9055-d7274ab52d11-run-systemd\") pod \"ovnkube-node-8s5k7\" (UID: \"8460ec76-ba89-4f8f-9055-d7274ab52d11\") " pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.552234 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8460ec76-ba89-4f8f-9055-d7274ab52d11-run-openvswitch\") pod \"ovnkube-node-8s5k7\" (UID: \"8460ec76-ba89-4f8f-9055-d7274ab52d11\") " pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.552286 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8460ec76-ba89-4f8f-9055-d7274ab52d11-run-openvswitch\") pod \"ovnkube-node-8s5k7\" (UID: \"8460ec76-ba89-4f8f-9055-d7274ab52d11\") " pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.552285 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/98439068-3c89-4c1b-8bb8-8aa848ef0cd3-multus-daemon-config\") pod \"multus-rlpbx\" (UID: \"98439068-3c89-4c1b-8bb8-8aa848ef0cd3\") " pod="openshift-multus/multus-rlpbx" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.552333 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/8ece7e9c-d49a-4348-98ec-bd6ab589f750-rootfs\") pod \"machine-config-daemon-knhz8\" (UID: \"8ece7e9c-d49a-4348-98ec-bd6ab589f750\") " pod="openshift-machine-config-operator/machine-config-daemon-knhz8" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.552374 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8460ec76-ba89-4f8f-9055-d7274ab52d11-systemd-units\") pod \"ovnkube-node-8s5k7\" (UID: \"8460ec76-ba89-4f8f-9055-d7274ab52d11\") " pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.552481 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8460ec76-ba89-4f8f-9055-d7274ab52d11-host-run-ovn-kubernetes\") pod \"ovnkube-node-8s5k7\" (UID: \"8460ec76-ba89-4f8f-9055-d7274ab52d11\") " pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.552541 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/a2ac9045-f02f-4149-afa5-61da1452d547-system-cni-dir\") pod \"multus-additional-cni-plugins-4s9w7\" (UID: \"a2ac9045-f02f-4149-afa5-61da1452d547\") " pod="openshift-multus/multus-additional-cni-plugins-4s9w7" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.552582 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/98439068-3c89-4c1b-8bb8-8aa848ef0cd3-host-var-lib-kubelet\") pod \"multus-rlpbx\" (UID: \"98439068-3c89-4c1b-8bb8-8aa848ef0cd3\") " pod="openshift-multus/multus-rlpbx" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.552604 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8460ec76-ba89-4f8f-9055-d7274ab52d11-host-slash\") pod \"ovnkube-node-8s5k7\" (UID: \"8460ec76-ba89-4f8f-9055-d7274ab52d11\") " pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.552625 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8460ec76-ba89-4f8f-9055-d7274ab52d11-run-systemd\") pod \"ovnkube-node-8s5k7\" (UID: \"8460ec76-ba89-4f8f-9055-d7274ab52d11\") " pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.552645 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/98439068-3c89-4c1b-8bb8-8aa848ef0cd3-etc-kubernetes\") pod \"multus-rlpbx\" (UID: \"98439068-3c89-4c1b-8bb8-8aa848ef0cd3\") " pod="openshift-multus/multus-rlpbx" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.552647 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/7bcb41f8-67f5-4a87-8b49-07da054e0c81-hosts-file\") pod \"node-resolver-mmh87\" (UID: \"7bcb41f8-67f5-4a87-8b49-07da054e0c81\") " pod="openshift-dns/node-resolver-mmh87" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.552707 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8460ec76-ba89-4f8f-9055-d7274ab52d11-var-lib-openvswitch\") pod \"ovnkube-node-8s5k7\" (UID: \"8460ec76-ba89-4f8f-9055-d7274ab52d11\") " pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.552741 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/98439068-3c89-4c1b-8bb8-8aa848ef0cd3-host-var-lib-cni-bin\") pod \"multus-rlpbx\" (UID: \"98439068-3c89-4c1b-8bb8-8aa848ef0cd3\") " pod="openshift-multus/multus-rlpbx" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.552837 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8460ec76-ba89-4f8f-9055-d7274ab52d11-run-ovn\") pod \"ovnkube-node-8s5k7\" (UID: \"8460ec76-ba89-4f8f-9055-d7274ab52d11\") " pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.551531 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8460ec76-ba89-4f8f-9055-d7274ab52d11-host-kubelet\") pod \"ovnkube-node-8s5k7\" (UID: \"8460ec76-ba89-4f8f-9055-d7274ab52d11\") " pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" Nov 25 
10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.552914 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8460ec76-ba89-4f8f-9055-d7274ab52d11-host-cni-netd\") pod \"ovnkube-node-8s5k7\" (UID: \"8460ec76-ba89-4f8f-9055-d7274ab52d11\") " pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.553178 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/98439068-3c89-4c1b-8bb8-8aa848ef0cd3-cni-binary-copy\") pod \"multus-rlpbx\" (UID: \"98439068-3c89-4c1b-8bb8-8aa848ef0cd3\") " pod="openshift-multus/multus-rlpbx" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.553179 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/a2ac9045-f02f-4149-afa5-61da1452d547-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-4s9w7\" (UID: \"a2ac9045-f02f-4149-afa5-61da1452d547\") " pod="openshift-multus/multus-additional-cni-plugins-4s9w7" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.553241 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8460ec76-ba89-4f8f-9055-d7274ab52d11-log-socket\") pod \"ovnkube-node-8s5k7\" (UID: \"8460ec76-ba89-4f8f-9055-d7274ab52d11\") " pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.553282 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/98439068-3c89-4c1b-8bb8-8aa848ef0cd3-host-run-multus-certs\") pod \"multus-rlpbx\" (UID: \"98439068-3c89-4c1b-8bb8-8aa848ef0cd3\") " pod="openshift-multus/multus-rlpbx" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.553345 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8460ec76-ba89-4f8f-9055-d7274ab52d11-ovnkube-config\") pod \"ovnkube-node-8s5k7\" (UID: \"8460ec76-ba89-4f8f-9055-d7274ab52d11\") " pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.553537 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a2ac9045-f02f-4149-afa5-61da1452d547-os-release\") pod \"multus-additional-cni-plugins-4s9w7\" (UID: \"a2ac9045-f02f-4149-afa5-61da1452d547\") " pod="openshift-multus/multus-additional-cni-plugins-4s9w7" Nov 25 10:32:07 crc kubenswrapper[4813]: E1125 10:32:07.553549 4813 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.553573 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8460ec76-ba89-4f8f-9055-d7274ab52d11-host-run-netns\") pod \"ovnkube-node-8s5k7\" (UID: \"8460ec76-ba89-4f8f-9055-d7274ab52d11\") " pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.553604 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/98439068-3c89-4c1b-8bb8-8aa848ef0cd3-host-run-netns\") pod \"multus-rlpbx\" (UID: 
\"98439068-3c89-4c1b-8bb8-8aa848ef0cd3\") " pod="openshift-multus/multus-rlpbx" Nov 25 10:32:07 crc kubenswrapper[4813]: E1125 10:32:07.553606 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 10:32:11.55358616 +0000 UTC m=+28.683296146 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.553638 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/98439068-3c89-4c1b-8bb8-8aa848ef0cd3-hostroot\") pod \"multus-rlpbx\" (UID: \"98439068-3c89-4c1b-8bb8-8aa848ef0cd3\") " pod="openshift-multus/multus-rlpbx" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.553656 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8460ec76-ba89-4f8f-9055-d7274ab52d11-env-overrides\") pod \"ovnkube-node-8s5k7\" (UID: \"8460ec76-ba89-4f8f-9055-d7274ab52d11\") " pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.553731 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/98439068-3c89-4c1b-8bb8-8aa848ef0cd3-multus-socket-dir-parent\") pod \"multus-rlpbx\" (UID: \"98439068-3c89-4c1b-8bb8-8aa848ef0cd3\") " pod="openshift-multus/multus-rlpbx" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.553808 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/98439068-3c89-4c1b-8bb8-8aa848ef0cd3-multus-cni-dir\") pod \"multus-rlpbx\" (UID: \"98439068-3c89-4c1b-8bb8-8aa848ef0cd3\") " pod="openshift-multus/multus-rlpbx" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.553831 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8460ec76-ba89-4f8f-9055-d7274ab52d11-node-log\") pod \"ovnkube-node-8s5k7\" (UID: \"8460ec76-ba89-4f8f-9055-d7274ab52d11\") " pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.553209 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/98439068-3c89-4c1b-8bb8-8aa848ef0cd3-host-var-lib-cni-multus\") pod \"multus-rlpbx\" (UID: \"98439068-3c89-4c1b-8bb8-8aa848ef0cd3\") " pod="openshift-multus/multus-rlpbx" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.553862 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8460ec76-ba89-4f8f-9055-d7274ab52d11-host-cni-bin\") pod \"ovnkube-node-8s5k7\" (UID: \"8460ec76-ba89-4f8f-9055-d7274ab52d11\") " pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.553899 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" 
(UniqueName: \"kubernetes.io/host-path/98439068-3c89-4c1b-8bb8-8aa848ef0cd3-host-run-k8s-cni-cncf-io\") pod \"multus-rlpbx\" (UID: \"98439068-3c89-4c1b-8bb8-8aa848ef0cd3\") " pod="openshift-multus/multus-rlpbx" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.554072 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a2ac9045-f02f-4149-afa5-61da1452d547-cni-binary-copy\") pod \"multus-additional-cni-plugins-4s9w7\" (UID: \"a2ac9045-f02f-4149-afa5-61da1452d547\") " pod="openshift-multus/multus-additional-cni-plugins-4s9w7" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.554105 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/98439068-3c89-4c1b-8bb8-8aa848ef0cd3-multus-conf-dir\") pod \"multus-rlpbx\" (UID: \"98439068-3c89-4c1b-8bb8-8aa848ef0cd3\") " pod="openshift-multus/multus-rlpbx" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.554101 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/98439068-3c89-4c1b-8bb8-8aa848ef0cd3-system-cni-dir\") pod \"multus-rlpbx\" (UID: \"98439068-3c89-4c1b-8bb8-8aa848ef0cd3\") " pod="openshift-multus/multus-rlpbx" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.554140 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8460ec76-ba89-4f8f-9055-d7274ab52d11-etc-openvswitch\") pod \"ovnkube-node-8s5k7\" (UID: \"8460ec76-ba89-4f8f-9055-d7274ab52d11\") " pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.554150 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/98439068-3c89-4c1b-8bb8-8aa848ef0cd3-os-release\") pod \"multus-rlpbx\" (UID: \"98439068-3c89-4c1b-8bb8-8aa848ef0cd3\") " pod="openshift-multus/multus-rlpbx" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.554354 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8ece7e9c-d49a-4348-98ec-bd6ab589f750-mcd-auth-proxy-config\") pod \"machine-config-daemon-knhz8\" (UID: \"8ece7e9c-d49a-4348-98ec-bd6ab589f750\") " pod="openshift-machine-config-operator/machine-config-daemon-knhz8" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.554499 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8460ec76-ba89-4f8f-9055-d7274ab52d11-ovnkube-script-lib\") pod \"ovnkube-node-8s5k7\" (UID: \"8460ec76-ba89-4f8f-9055-d7274ab52d11\") " pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.557848 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8460ec76-ba89-4f8f-9055-d7274ab52d11-ovn-node-metrics-cert\") pod \"ovnkube-node-8s5k7\" (UID: \"8460ec76-ba89-4f8f-9055-d7274ab52d11\") " pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.558057 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8ece7e9c-d49a-4348-98ec-bd6ab589f750-proxy-tls\") pod \"machine-config-daemon-knhz8\" (UID: 
\"8ece7e9c-d49a-4348-98ec-bd6ab589f750\") " pod="openshift-machine-config-operator/machine-config-daemon-knhz8" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.561786 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4s9w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2ac9045-f02f-4149-afa5-61da1452d547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0
c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4s9w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:07Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.569751 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svkcf\" (UniqueName: \"kubernetes.io/projected/8460ec76-ba89-4f8f-9055-d7274ab52d11-kube-api-access-svkcf\") pod \"ovnkube-node-8s5k7\" (UID: \"8460ec76-ba89-4f8f-9055-d7274ab52d11\") " pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.575757 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j55j7\" (UniqueName: \"kubernetes.io/projected/8ece7e9c-d49a-4348-98ec-bd6ab589f750-kube-api-access-j55j7\") pod \"machine-config-daemon-knhz8\" (UID: \"8ece7e9c-d49a-4348-98ec-bd6ab589f750\") " pod="openshift-machine-config-operator/machine-config-daemon-knhz8" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.575857 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtc7m\" (UniqueName: \"kubernetes.io/projected/7bcb41f8-67f5-4a87-8b49-07da054e0c81-kube-api-access-xtc7m\") pod \"node-resolver-mmh87\" (UID: \"7bcb41f8-67f5-4a87-8b49-07da054e0c81\") " pod="openshift-dns/node-resolver-mmh87" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.577617 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdxm5\" (UniqueName: \"kubernetes.io/projected/98439068-3c89-4c1b-8bb8-8aa848ef0cd3-kube-api-access-mdxm5\") pod \"multus-rlpbx\" (UID: \"98439068-3c89-4c1b-8bb8-8aa848ef0cd3\") " pod="openshift-multus/multus-rlpbx" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.579641 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgwgd\" (UniqueName: \"kubernetes.io/projected/a2ac9045-f02f-4149-afa5-61da1452d547-kube-api-access-fgwgd\") pod \"multus-additional-cni-plugins-4s9w7\" (UID: \"a2ac9045-f02f-4149-afa5-61da1452d547\") " pod="openshift-multus/multus-additional-cni-plugins-4s9w7" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.581516 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8460ec76-ba89-4f8f-9055-d7274ab52d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8s5k7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:07Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.594857 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"061a2a52-878f-4543-8408-3a7b838f8881\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761ff3f6b4afa8edd4892d9fe727e977fb9700a8c7ab1c149c12bfa6431951c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf09669b247e0daa0787d296aa833570e1a542082a7a698bb499dc34f16fa4be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resou
rces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e593ff2a6412d8dfd3cd96e456f4fe9e2f8b04302d5b9036b828a3cf480b573\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11e2aa9eaa941ade1982256194422becbe3f375508cd507f603a822b10e03134\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:07Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.600142 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a2ac9045-f02f-4149-afa5-61da1452d547-tuning-conf-dir\") pod \"multus-additional-cni-plugins-4s9w7\" (UID: \"a2ac9045-f02f-4149-afa5-61da1452d547\") " pod="openshift-multus/multus-additional-cni-plugins-4s9w7" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.607315 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:07Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.619286 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:07Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.635010 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03303956e8d88df49c9c142a7074fa39272a78ea67e868b302d3a663d7f7178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:07Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.653458 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.653570 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 10:32:07 crc kubenswrapper[4813]: E1125 10:32:07.653652 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:32:11.653622062 +0000 UTC m=+28.783331948 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:32:07 crc kubenswrapper[4813]: E1125 10:32:07.653706 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 10:32:07 crc kubenswrapper[4813]: E1125 10:32:07.653726 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 10:32:07 crc kubenswrapper[4813]: E1125 10:32:07.653738 4813 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.653771 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 10:32:07 crc kubenswrapper[4813]: E1125 10:32:07.653780 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2025-11-25 10:32:11.653767736 +0000 UTC m=+28.783477622 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.653859 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 10:32:07 crc kubenswrapper[4813]: E1125 10:32:07.653938 4813 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 10:32:07 crc kubenswrapper[4813]: E1125 10:32:07.653963 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 10:32:07 crc kubenswrapper[4813]: E1125 10:32:07.653984 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 10:32:11.653971121 +0000 UTC m=+28.783681007 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 10:32:07 crc kubenswrapper[4813]: E1125 10:32:07.653995 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 10:32:07 crc kubenswrapper[4813]: E1125 10:32:07.654007 4813 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 10:32:07 crc kubenswrapper[4813]: E1125 10:32:07.654056 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-25 10:32:11.654035603 +0000 UTC m=+28.783745489 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.687302 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-mmh87" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.697324 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" Nov 25 10:32:07 crc kubenswrapper[4813]: W1125 10:32:07.697638 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7bcb41f8_67f5_4a87_8b49_07da054e0c81.slice/crio-934e35eddd7863dd23062c2e933d94af834b3355845e08c9e1d8dc9f60252da0 WatchSource:0}: Error finding container 934e35eddd7863dd23062c2e933d94af834b3355845e08c9e1d8dc9f60252da0: Status 404 returned error can't find the container with id 934e35eddd7863dd23062c2e933d94af834b3355845e08c9e1d8dc9f60252da0 Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.706002 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-4s9w7" Nov 25 10:32:07 crc kubenswrapper[4813]: W1125 10:32:07.708317 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ece7e9c_d49a_4348_98ec_bd6ab589f750.slice/crio-0ef084dead578128a685304c10834fc2d18b7444371f09ddbf6c81b23b050c4a WatchSource:0}: Error finding container 0ef084dead578128a685304c10834fc2d18b7444371f09ddbf6c81b23b050c4a: Status 404 returned error can't find the container with id 0ef084dead578128a685304c10834fc2d18b7444371f09ddbf6c81b23b050c4a Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.716647 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.717192 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-rlpbx" Nov 25 10:32:07 crc kubenswrapper[4813]: W1125 10:32:07.734739 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda2ac9045_f02f_4149_afa5_61da1452d547.slice/crio-c92a351e7e8187d2b5acbd0b985987dbfdbe240be31240905ebe6af99ba167ba WatchSource:0}: Error finding container c92a351e7e8187d2b5acbd0b985987dbfdbe240be31240905ebe6af99ba167ba: Status 404 returned error can't find the container with id c92a351e7e8187d2b5acbd0b985987dbfdbe240be31240905ebe6af99ba167ba Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.751837 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4s9w7" event={"ID":"a2ac9045-f02f-4149-afa5-61da1452d547","Type":"ContainerStarted","Data":"c92a351e7e8187d2b5acbd0b985987dbfdbe240be31240905ebe6af99ba167ba"} Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.753044 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"adac7b8b6297f077adc2d0e402547d19845a4b66a1279e143ba89f014ccdbf15"} Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.754389 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" event={"ID":"8ece7e9c-d49a-4348-98ec-bd6ab589f750","Type":"ContainerStarted","Data":"0ef084dead578128a685304c10834fc2d18b7444371f09ddbf6c81b23b050c4a"} Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.755338 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-mmh87" event={"ID":"7bcb41f8-67f5-4a87-8b49-07da054e0c81","Type":"ContainerStarted","Data":"934e35eddd7863dd23062c2e933d94af834b3355845e08c9e1d8dc9f60252da0"} Nov 25 10:32:07 crc kubenswrapper[4813]: E1125 10:32:07.760699 4813 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 10:32:07 crc kubenswrapper[4813]: W1125 10:32:07.776902 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod98439068_3c89_4c1b_8bb8_8aa848ef0cd3.slice/crio-c1b991cebf5a181ca2ffea65d1feefe4c57fb9383252e3480c6510c3cd1119f4 WatchSource:0}: Error finding container c1b991cebf5a181ca2ffea65d1feefe4c57fb9383252e3480c6510c3cd1119f4: Status 404 returned error can't find the container with id c1b991cebf5a181ca2ffea65d1feefe4c57fb9383252e3480c6510c3cd1119f4 Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.862527 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-qltmc"] Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.862939 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-qltmc" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.865232 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.865340 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.868580 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.869756 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.880980 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03303956e8d88df49c9c142a7074fa39272a78ea67e868b302d3a663d7f7178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:07Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.892568 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ece7e9c-d49a-4348-98ec-bd6ab589f750\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j55j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j55j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-knhz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:07Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.907058 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86379c39-b839-4552-949c-35431188a3a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf4d6feac8fd516ce2d5e2ec13519c2bbd2d152cffe7c434fe2c4b478e8c9a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80f2017cddd8c12997b1818074df5aa37a902dca43c4b60dda58080e1887f8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225dc69c294a0063eda858d71902e848fb59d4595c25bfeecdf8dfb60fdcd6f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cbb3888ff07d07784e188a0b7b49e0f5b421cfaeb61924a0a46094fb3795b32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e393f04b541e0fc8c686b42396605529aa65fdaaf6602dd7c64a322a5071d643\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T10:31:57Z\\\",\\\"message\\\":\\\"W1125 10:31:46.900040 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1125 10:31:46.900557 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764066706 cert, and key in /tmp/serving-cert-1749499007/serving-signer.crt, /tmp/serving-cert-1749499007/serving-signer.key\\\\nI1125 10:31:47.317086 1 observer_polling.go:159] Starting file observer\\\\nW1125 10:31:47.321027 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 10:31:47.321219 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 10:31:47.325062 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1749499007/tls.crt::/tmp/serving-cert-1749499007/tls.key\\\\\\\"\\\\nF1125 10:31:57.761534 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46e1b456988c700012c86fac792b65d2e7c9a049057d5a17efbf600418191910\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3af1cb4a9a556e116874a9b7f7cb75a99db83b991b65280b6b2a96233ca0a85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3af1cb4a9a556e116874a9b7f7cb75a99db83b991b65280b6b2a96233ca0a85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:31:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:07Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.916096 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mmh87" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bcb41f8-67f5-4a87-8b49-07da054e0c81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xtc7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mmh87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:07Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.929194 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:07Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.939920 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rlpbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98439068-3c89-4c1b-8bb8-8aa848ef0cd3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdxm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rlpbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:07Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.951005 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qltmc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7637b907-3ae7-4b15-a4b9-a0c2217384a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qvsb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qltmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:07Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.956112 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/7637b907-3ae7-4b15-a4b9-a0c2217384a1-serviceca\") pod \"node-ca-qltmc\" (UID: \"7637b907-3ae7-4b15-a4b9-a0c2217384a1\") " pod="openshift-image-registry/node-ca-qltmc" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.956147 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7637b907-3ae7-4b15-a4b9-a0c2217384a1-host\") pod \"node-ca-qltmc\" (UID: \"7637b907-3ae7-4b15-a4b9-a0c2217384a1\") " pod="openshift-image-registry/node-ca-qltmc" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.956178 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvsb9\" (UniqueName: \"kubernetes.io/projected/7637b907-3ae7-4b15-a4b9-a0c2217384a1-kube-api-access-qvsb9\") pod 
\"node-ca-qltmc\" (UID: \"7637b907-3ae7-4b15-a4b9-a0c2217384a1\") " pod="openshift-image-registry/node-ca-qltmc" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.965837 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00ebb057ca6152197fa76fc78787533ab8ddaa1e1a096c624e3efc5fcf091332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616fae5157b8d51f903f870d19e7ed40447c3eb954b0e1bd0b3323c27deb59f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:07Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.977841 4813 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:07Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:07 crc kubenswrapper[4813]: I1125 10:32:07.989931 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:07Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:08 crc kubenswrapper[4813]: I1125 10:32:08.000984 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:07Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:08 crc kubenswrapper[4813]: I1125 10:32:08.016795 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4s9w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2ac9045-f02f-4149-afa5-61da1452d547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4s9w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:08Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:08 crc kubenswrapper[4813]: I1125 10:32:08.036900 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8460ec76-ba89-4f8f-9055-d7274ab52d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8s5k7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:08Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:08 crc kubenswrapper[4813]: I1125 10:32:08.047464 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"061a2a52-878f-4543-8408-3a7b838f8881\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761ff3f6b4afa8edd4892d9fe727e977fb9700a8c7ab1c149c12bfa6431951c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf09669b247e0daa0787d296aa833570e1a542082a7a698bb499dc34f16fa4be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resou
rces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e593ff2a6412d8dfd3cd96e456f4fe9e2f8b04302d5b9036b828a3cf480b573\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11e2aa9eaa941ade1982256194422becbe3f375508cd507f603a822b10e03134\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:08Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:08 crc kubenswrapper[4813]: I1125 10:32:08.056973 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/7637b907-3ae7-4b15-a4b9-a0c2217384a1-serviceca\") pod \"node-ca-qltmc\" (UID: \"7637b907-3ae7-4b15-a4b9-a0c2217384a1\") " pod="openshift-image-registry/node-ca-qltmc" Nov 25 10:32:08 crc kubenswrapper[4813]: I1125 10:32:08.057024 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7637b907-3ae7-4b15-a4b9-a0c2217384a1-host\") pod \"node-ca-qltmc\" (UID: \"7637b907-3ae7-4b15-a4b9-a0c2217384a1\") " pod="openshift-image-registry/node-ca-qltmc" Nov 25 10:32:08 crc kubenswrapper[4813]: I1125 10:32:08.057069 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvsb9\" (UniqueName: \"kubernetes.io/projected/7637b907-3ae7-4b15-a4b9-a0c2217384a1-kube-api-access-qvsb9\") pod \"node-ca-qltmc\" 
(UID: \"7637b907-3ae7-4b15-a4b9-a0c2217384a1\") " pod="openshift-image-registry/node-ca-qltmc" Nov 25 10:32:08 crc kubenswrapper[4813]: I1125 10:32:08.057190 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7637b907-3ae7-4b15-a4b9-a0c2217384a1-host\") pod \"node-ca-qltmc\" (UID: \"7637b907-3ae7-4b15-a4b9-a0c2217384a1\") " pod="openshift-image-registry/node-ca-qltmc" Nov 25 10:32:08 crc kubenswrapper[4813]: I1125 10:32:08.079477 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvsb9\" (UniqueName: \"kubernetes.io/projected/7637b907-3ae7-4b15-a4b9-a0c2217384a1-kube-api-access-qvsb9\") pod \"node-ca-qltmc\" (UID: \"7637b907-3ae7-4b15-a4b9-a0c2217384a1\") " pod="openshift-image-registry/node-ca-qltmc" Nov 25 10:32:08 crc kubenswrapper[4813]: I1125 10:32:08.145077 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/7637b907-3ae7-4b15-a4b9-a0c2217384a1-serviceca\") pod \"node-ca-qltmc\" (UID: \"7637b907-3ae7-4b15-a4b9-a0c2217384a1\") " pod="openshift-image-registry/node-ca-qltmc" Nov 25 10:32:08 crc kubenswrapper[4813]: I1125 10:32:08.188841 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-qltmc" Nov 25 10:32:08 crc kubenswrapper[4813]: W1125 10:32:08.202113 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7637b907_3ae7_4b15_a4b9_a0c2217384a1.slice/crio-39d845c1971e606b2a6d273c05463c21481c8e7a336f9379cf5fd95915c90d2b WatchSource:0}: Error finding container 39d845c1971e606b2a6d273c05463c21481c8e7a336f9379cf5fd95915c90d2b: Status 404 returned error can't find the container with id 39d845c1971e606b2a6d273c05463c21481c8e7a336f9379cf5fd95915c90d2b Nov 25 10:32:08 crc kubenswrapper[4813]: I1125 10:32:08.620981 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 10:32:08 crc kubenswrapper[4813]: I1125 10:32:08.621012 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 10:32:08 crc kubenswrapper[4813]: E1125 10:32:08.621118 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 10:32:08 crc kubenswrapper[4813]: E1125 10:32:08.621327 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 10:32:08 crc kubenswrapper[4813]: I1125 10:32:08.760525 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-mmh87" event={"ID":"7bcb41f8-67f5-4a87-8b49-07da054e0c81","Type":"ContainerStarted","Data":"8fbf69eb2f0afb160e40675e9a17e8a9798a3f02de6a2f3aae7a30ef989e5479"} Nov 25 10:32:08 crc kubenswrapper[4813]: I1125 10:32:08.762233 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-qltmc" event={"ID":"7637b907-3ae7-4b15-a4b9-a0c2217384a1","Type":"ContainerStarted","Data":"713975d4e8de4e14484cbd711f5279ddce3acad00571bf052b0ed728bd1a0ccc"} Nov 25 10:32:08 crc kubenswrapper[4813]: I1125 10:32:08.762268 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-qltmc" event={"ID":"7637b907-3ae7-4b15-a4b9-a0c2217384a1","Type":"ContainerStarted","Data":"39d845c1971e606b2a6d273c05463c21481c8e7a336f9379cf5fd95915c90d2b"} Nov 25 10:32:08 crc kubenswrapper[4813]: I1125 10:32:08.763493 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rlpbx" event={"ID":"98439068-3c89-4c1b-8bb8-8aa848ef0cd3","Type":"ContainerStarted","Data":"73be3b0cabd20c94bd5c69211038398effe8adbb93eda17dbb136f17fa5ba62e"} Nov 25 10:32:08 crc kubenswrapper[4813]: I1125 10:32:08.763523 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rlpbx" event={"ID":"98439068-3c89-4c1b-8bb8-8aa848ef0cd3","Type":"ContainerStarted","Data":"c1b991cebf5a181ca2ffea65d1feefe4c57fb9383252e3480c6510c3cd1119f4"} Nov 25 10:32:08 crc kubenswrapper[4813]: I1125 10:32:08.765003 4813 generic.go:334] "Generic (PLEG): container finished" podID="8460ec76-ba89-4f8f-9055-d7274ab52d11" containerID="6554bcb1ce7e97de39f99556fc4e3db63a583ea45bd87706a3c7737a8bde4f5b" exitCode=0 Nov 25 10:32:08 crc kubenswrapper[4813]: I1125 10:32:08.765067 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" event={"ID":"8460ec76-ba89-4f8f-9055-d7274ab52d11","Type":"ContainerDied","Data":"6554bcb1ce7e97de39f99556fc4e3db63a583ea45bd87706a3c7737a8bde4f5b"} Nov 25 10:32:08 crc kubenswrapper[4813]: I1125 10:32:08.765086 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" event={"ID":"8460ec76-ba89-4f8f-9055-d7274ab52d11","Type":"ContainerStarted","Data":"94c2e058adc2b124baf2d5fc38723175acfb89906c9f5397e682f8bf1c617b0c"} Nov 25 10:32:08 crc kubenswrapper[4813]: I1125 10:32:08.767922 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" event={"ID":"8ece7e9c-d49a-4348-98ec-bd6ab589f750","Type":"ContainerStarted","Data":"b85e2f2d2a870b205f19402a20540fa67104d12d2fcd412ada24c78b0602f2ac"} Nov 25 10:32:08 crc kubenswrapper[4813]: I1125 10:32:08.767951 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" event={"ID":"8ece7e9c-d49a-4348-98ec-bd6ab589f750","Type":"ContainerStarted","Data":"c16599a2b18976267f55176085b4b11e3e253e308707081d06d28d64f4dbb627"} Nov 25 10:32:08 crc kubenswrapper[4813]: I1125 10:32:08.770025 4813 generic.go:334] "Generic (PLEG): container finished" podID="a2ac9045-f02f-4149-afa5-61da1452d547" containerID="792d5ec80cac3667bf3ad534b473ae86eca391f49782cfc0938d789eefd24a0f" exitCode=0 Nov 25 10:32:08 crc kubenswrapper[4813]: I1125 
10:32:08.770388 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4s9w7" event={"ID":"a2ac9045-f02f-4149-afa5-61da1452d547","Type":"ContainerDied","Data":"792d5ec80cac3667bf3ad534b473ae86eca391f49782cfc0938d789eefd24a0f"} Nov 25 10:32:08 crc kubenswrapper[4813]: I1125 10:32:08.777897 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:08Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:08 crc kubenswrapper[4813]: I1125 10:32:08.790742 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:08Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:08 crc kubenswrapper[4813]: I1125 10:32:08.806475 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:08Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:08 crc kubenswrapper[4813]: I1125 10:32:08.821600 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4s9w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2ac9045-f02f-4149-afa5-61da1452d547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4s9w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:08Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:08 crc kubenswrapper[4813]: I1125 10:32:08.846484 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8460ec76-ba89-4f8f-9055-d7274ab52d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8s5k7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:08Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:08 crc kubenswrapper[4813]: I1125 10:32:08.872030 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"061a2a52-878f-4543-8408-3a7b838f8881\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761ff3f6b4afa8edd4892d9fe727e977fb9700a8c7ab1c149c12bfa6431951c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf09669b247e0daa0787d296aa833570e1a542082a7a698bb499dc34f16fa4be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resou
rces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e593ff2a6412d8dfd3cd96e456f4fe9e2f8b04302d5b9036b828a3cf480b573\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11e2aa9eaa941ade1982256194422becbe3f375508cd507f603a822b10e03134\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:08Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:08 crc kubenswrapper[4813]: I1125 10:32:08.891745 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03303956e8d88df49c9c142a7074fa39272a78ea67e868b302d3a663d7f7178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:08Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:08 crc kubenswrapper[4813]: I1125 10:32:08.905000 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mmh87" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bcb41f8-67f5-4a87-8b49-07da054e0c81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fbf69eb2f0afb160e40675e9a17e8a9798a3f02de6a2f3aae7a30ef989e5479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xtc7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mmh87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:08Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:08 crc kubenswrapper[4813]: I1125 10:32:08.920390 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ece7e9c-d49a-4348-98ec-bd6ab589f750\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j55j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j55j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-knhz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:08Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:08 crc kubenswrapper[4813]: I1125 10:32:08.937536 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86379c39-b839-4552-949c-35431188a3a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf4d6feac8fd516ce2d5e2ec13519c2bbd2d152cffe7c434fe2c4b478e8c9a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80f2017cddd8c12997b1818074df5aa37a902dca43c4b60dda58080e1887f8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225dc69c294a0063eda858d71902e848fb59d4595c25bfeecdf8dfb60fdcd6f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cbb3888ff07d07784e188a0b7b49e0f5b421cfaeb61924a0a46094fb3795b32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e393f04b541e0fc8c686b42396605529aa65fdaaf6602dd7c64a322a5071d643\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T10:31:57Z\\\",\\\"message\\\":\\\"W1125 10:31:46.900040 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1125 
10:31:46.900557 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764066706 cert, and key in /tmp/serving-cert-1749499007/serving-signer.crt, /tmp/serving-cert-1749499007/serving-signer.key\\\\nI1125 10:31:47.317086 1 observer_polling.go:159] Starting file observer\\\\nW1125 10:31:47.321027 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 10:31:47.321219 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 10:31:47.325062 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1749499007/tls.crt::/tmp/serving-cert-1749499007/tls.key\\\\\\\"\\\\nF1125 10:31:57.761534 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46e1b456988c700012c86fac792b65d2e7c9a049057d5a17efbf600418191910\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3af1cb4a9a556e116874a9b7f7cb75a99db83b991b65280b6b2a96233ca0a85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3af1cb4a9a556e116874a9b7f7cb75a99db83b991b65280b6b2a96233ca0a85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:31:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:08Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:08 crc kubenswrapper[4813]: I1125 10:32:08.950313 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00ebb057ca6152197fa76fc78787533ab8ddaa1e1a096c624e3efc5fcf091332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616fae5157b8d51f903f870d19e7ed40447c3eb954b0e1bd0b3323c27deb59f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-11-25T10:32:08Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:08 crc kubenswrapper[4813]: I1125 10:32:08.960106 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:08Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:08 crc kubenswrapper[4813]: I1125 10:32:08.974611 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rlpbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98439068-3c89-4c1b-8bb8-8aa848ef0cd3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdxm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rlpbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:08Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:08 crc kubenswrapper[4813]: I1125 10:32:08.985789 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qltmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7637b907-3ae7-4b15-a4b9-a0c2217384a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qvsb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qltmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:08Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:09 crc kubenswrapper[4813]: I1125 10:32:09.000877 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00ebb057ca6152197fa76fc78787533ab8ddaa1e1a096c624e3efc5fcf091332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616fae5157b8d51f903f870d19e7ed40447c3eb954b0e1bd0b3323c27deb59f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:08Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:09 crc kubenswrapper[4813]: I1125 10:32:09.015869 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adac7b8b6297f077adc2d0e402547d19845a4b66a1279e143ba89f014ccdbf15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:09Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:09 crc kubenswrapper[4813]: I1125 10:32:09.030532 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rlpbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98439068-3c89-4c1b-8bb8-8aa848ef0cd3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73be3b0cabd20c94bd5c69211038398effe8adbb93eda17dbb136f17fa5ba62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdxm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rlpbx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:09Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:09 crc kubenswrapper[4813]: I1125 10:32:09.042918 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qltmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7637b907-3ae7-4b15-a4b9-a0c2217384a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://713975d4e8de4e14484cbd711f5279ddce3acad00571bf052b0ed728bd1a0ccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qvsb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qltmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:09Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:09 crc kubenswrapper[4813]: I1125 10:32:09.067796 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8460ec76-ba89-4f8f-9055-d7274ab52d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6554bcb1ce7e97de39f99556fc4e3db63a583ea45bd87706a3c7737a8bde4f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6554bcb1ce7e97de39f99556fc4e3db63a583ea45bd87706a3c7737a8bde4f5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8s5k7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:09Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:09 crc kubenswrapper[4813]: I1125 10:32:09.099469 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"061a2a52-878f-4543-8408-3a7b838f8881\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761ff3f6b4afa8edd4892d9fe727e977fb9700a8c7ab1c149c12bfa6431951c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf09669b247e0daa0787d296aa833570e1a542082a7a698bb499dc34f16fa4be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de25971
26bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e593ff2a6412d8dfd3cd96e456f4fe9e2f8b04302d5b9036b828a3cf480b573\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11e2aa9eaa941ade1982256194422becbe3f375508cd507f603a822b10e03134\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:09Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:09 crc kubenswrapper[4813]: I1125 10:32:09.129432 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:09Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:09 crc kubenswrapper[4813]: I1125 10:32:09.150972 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:09Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:09 crc kubenswrapper[4813]: I1125 10:32:09.171501 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:09Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:09 crc kubenswrapper[4813]: I1125 10:32:09.189727 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4s9w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2ac9045-f02f-4149-afa5-61da1452d547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://792d5ec80cac3667bf3ad534b473ae86eca391f49782cfc0938d789eefd24a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://792d5ec80cac3667bf3ad534b473ae86eca391f49782cfc0938d789eefd24a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4s9w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:09Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:09 crc 
kubenswrapper[4813]: I1125 10:32:09.207791 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03303956e8d88df49c9c142a7074fa39272a78ea67e868b302d3a663d7f7178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:09Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:09 crc kubenswrapper[4813]: I1125 10:32:09.224845 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86379c39-b839-4552-949c-35431188a3a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf4d6feac8fd516ce2d5e2ec13519c2bbd2d152cffe7c434fe2c4b478e8c9a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80f2017cddd8c12997b1818074df5aa37a902dca43c4b60dda58080e1887f8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225dc69c294a0063eda858d71902e848fb59d4595c25bfeecdf8dfb60fdcd6f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cbb3888ff07d07784e188a0b7b49e0f5b421cfaeb61924a0a46094fb3795b32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e393f04b541e0fc8c686b42396605529aa65fdaaf6602dd7c64a322a5071d643\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T10:31:57Z\\\",\\\"message\\\":\\\"W1125 10:31:46.900040 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1125 
10:31:46.900557 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764066706 cert, and key in /tmp/serving-cert-1749499007/serving-signer.crt, /tmp/serving-cert-1749499007/serving-signer.key\\\\nI1125 10:31:47.317086 1 observer_polling.go:159] Starting file observer\\\\nW1125 10:31:47.321027 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 10:31:47.321219 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 10:31:47.325062 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1749499007/tls.crt::/tmp/serving-cert-1749499007/tls.key\\\\\\\"\\\\nF1125 10:31:57.761534 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46e1b456988c700012c86fac792b65d2e7c9a049057d5a17efbf600418191910\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3af1cb4a9a556e116874a9b7f7cb75a99db83b991b65280b6b2a96233ca0a85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3af1cb4a9a556e116874a9b7f7cb75a99db83b991b65280b6b2a96233ca0a85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:31:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:09Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:09 crc kubenswrapper[4813]: I1125 10:32:09.239146 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mmh87" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bcb41f8-67f5-4a87-8b49-07da054e0c81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fbf69eb2f0afb160e40675e9a17e8a9798a3f02de6a2f3aae7a30ef989e5479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xtc7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mmh87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:09Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:09 crc kubenswrapper[4813]: I1125 10:32:09.249973 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ece7e9c-d49a-4348-98ec-bd6ab589f750\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b85e2f2d2a870b205f19402a20540fa67104d12d2fcd412ada24c78b0602f2ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j55j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c16599a2b18976267f55176085b4b11e3e253e308707081d06d28d64f4dbb627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j55j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-knhz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:09Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:09 crc kubenswrapper[4813]: I1125 10:32:09.621413 4813 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 10:32:09 crc kubenswrapper[4813]: E1125 10:32:09.621907 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 10:32:09 crc kubenswrapper[4813]: I1125 10:32:09.776880 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" event={"ID":"8460ec76-ba89-4f8f-9055-d7274ab52d11","Type":"ContainerStarted","Data":"ee35613ff013fdd9f9ba4aa81006a99cd328ab65010b9b337815829bfcc88937"} Nov 25 10:32:09 crc kubenswrapper[4813]: I1125 10:32:09.776922 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" event={"ID":"8460ec76-ba89-4f8f-9055-d7274ab52d11","Type":"ContainerStarted","Data":"1581fa41d3a426258f7c464d5e0f2ad431917ccec0616d26bb8b0affa320c90e"} Nov 25 10:32:09 crc kubenswrapper[4813]: I1125 10:32:09.776936 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" event={"ID":"8460ec76-ba89-4f8f-9055-d7274ab52d11","Type":"ContainerStarted","Data":"0ab3178c217051fe9026c77a963c194bed57ec0fb9521678f41c7c16235ca789"} Nov 25 10:32:09 crc kubenswrapper[4813]: I1125 10:32:09.776946 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" event={"ID":"8460ec76-ba89-4f8f-9055-d7274ab52d11","Type":"ContainerStarted","Data":"d0292e263e2315d5f0352fb15d9e84e89f103c0b8e3371db2a611b001c5a3fe6"} Nov 25 10:32:09 crc kubenswrapper[4813]: I1125 10:32:09.776954 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" event={"ID":"8460ec76-ba89-4f8f-9055-d7274ab52d11","Type":"ContainerStarted","Data":"7c4c4032f6080041e0b54686cb2c9981d2578e7a2bd02bcc1cf008c8fa3bfb6d"} Nov 25 10:32:09 crc kubenswrapper[4813]: I1125 10:32:09.776963 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" event={"ID":"8460ec76-ba89-4f8f-9055-d7274ab52d11","Type":"ContainerStarted","Data":"7324d51c21107fadbd2f170e16f3cc20fc473ca9b7b1bbe0fc5e64378bd6ab7f"} Nov 25 10:32:09 crc kubenswrapper[4813]: I1125 10:32:09.778934 4813 generic.go:334] "Generic (PLEG): container finished" podID="a2ac9045-f02f-4149-afa5-61da1452d547" containerID="2afd11e5128cad91161f49b1e5d6ac378dbd319773996dbe702bf678a45a4a91" exitCode=0 Nov 25 10:32:09 crc kubenswrapper[4813]: I1125 10:32:09.778962 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4s9w7" event={"ID":"a2ac9045-f02f-4149-afa5-61da1452d547","Type":"ContainerDied","Data":"2afd11e5128cad91161f49b1e5d6ac378dbd319773996dbe702bf678a45a4a91"} Nov 25 10:32:09 crc kubenswrapper[4813]: I1125 10:32:09.794339 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ece7e9c-d49a-4348-98ec-bd6ab589f750\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b85e2f2d2a870b205f19402a20540fa67104d12d2fcd412ada24c78b0602f2ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j55j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c16599a2b18976267f55176085b4b11e3e253e308707081d06d28d64f4dbb627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j55j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-knhz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:09Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:09 crc kubenswrapper[4813]: I1125 10:32:09.810045 4813 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86379c39-b839-4552-949c-35431188a3a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf4d6feac8fd516ce2d5e2ec13519c2bbd2d152cffe7c434fe2c4b478e8c9a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80f2017cddd8c12997b1818074df5aa37a902dca43c4b60dda58080e1887f8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225dc69c294a0063eda858d71902e848fb59d4595c25bfeecdf8dfb60fdcd6f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cbb3888ff07d07784e188a0b7b49e0f5b421cfaeb61924a0a46094fb3795b32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e393f04b541e0fc8c686b42396605529aa65fdaaf6602dd7c64a322a5071d643\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T10:31:57Z\\\",\\\"message\\\":\\\"W1125 10:31:46.900040 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1125 10:31:46.900557 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764066706 cert, and key in /tmp/serving-cert-1749499007/serving-signer.crt, /tmp/serving-cert-1749499007/serving-signer.key\\\\nI1125 10:31:47.317086 1 observer_polling.go:159] Starting file observer\\\\nW1125 10:31:47.321027 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 10:31:47.321219 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 10:31:47.325062 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1749499007/tls.crt::/tmp/serving-cert-1749499007/tls.key\\\\\\\"\\\\nF1125 10:31:57.761534 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46e1b456988c700012c86fac792b65d2e7c9a049057d5a17efbf600418191910\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3af1cb4a9a556e116874a9b7f7cb75a99db83b991b65280b6b2a96233ca0a85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3af1cb4a9a556e116874a9b7f7cb75a99db83b991b65280b6b2a96233ca0a85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:31:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:09Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:09 crc kubenswrapper[4813]: I1125 10:32:09.823877 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mmh87" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bcb41f8-67f5-4a87-8b49-07da054e0c81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fbf69eb2f0afb160e40675e9a17e8a9798a3f02de6a2f3aae7a30ef989e5479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xtc7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mmh87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:09Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:09 crc kubenswrapper[4813]: I1125 10:32:09.838010 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adac7b8b6297f077adc2d0e402547d19845a4b66a1279e143ba89f014ccdbf15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:09Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:09 crc kubenswrapper[4813]: I1125 10:32:09.849494 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rlpbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98439068-3c89-4c1b-8bb8-8aa848ef0cd3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73be3b0cabd20c94bd5c69211038398effe8adbb93eda17dbb136f17fa5ba62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdxm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rlpbx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:09Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:09 crc kubenswrapper[4813]: I1125 10:32:09.859857 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qltmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7637b907-3ae7-4b15-a4b9-a0c2217384a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://713975d4e8de4e14484cbd711f5279ddce3acad00571bf052b0ed728bd1a0ccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qvsb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qltmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:09Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:09 crc kubenswrapper[4813]: I1125 10:32:09.873332 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00ebb057ca6152197fa76fc78787533ab8ddaa1e1a096c624e3efc5fcf091332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616fae5157b8d51f903f870d19e7ed40447c3eb954b0e1bd0b3323c27deb59f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:09Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:09 crc kubenswrapper[4813]: I1125 10:32:09.887462 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:09Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:09 crc kubenswrapper[4813]: I1125 10:32:09.901739 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:09Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:09 crc kubenswrapper[4813]: I1125 10:32:09.915113 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:09Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:09 crc kubenswrapper[4813]: I1125 10:32:09.930866 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4s9w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2ac9045-f02f-4149-afa5-61da1452d547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://792d5ec80cac3667bf3ad534b473ae86eca391f49782cfc0938d789eefd24a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://792d5ec80cac3667bf3ad534b473ae86eca391f49782cfc0938d789eefd24a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2afd11e5128cad91161f49b1e5d6ac378dbd319773996dbe702bf678a45a4a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2afd11e5128cad91161f49b1e5d6ac378dbd319773996dbe702bf678a45a4a91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-
25T10:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4s9w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:09Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:09 crc kubenswrapper[4813]: I1125 10:32:09.948855 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8460ec76-ba89-4f8f-9055-d7274ab52d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6554bcb1ce7e97de39f99556fc4e3db63a583ea45bd87706a3c7737a8bde4f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6554bcb1ce7e97de39f99556fc4e3db63a583ea45bd87706a3c7737a8bde4f5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8s5k7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:09Z 
is after 2025-08-24T17:21:41Z" Nov 25 10:32:09 crc kubenswrapper[4813]: I1125 10:32:09.959940 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"061a2a52-878f-4543-8408-3a7b838f8881\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761ff3f6b4afa8edd4892d9fe727e977fb9700a8c7ab1c149c12bfa6431951c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf09669b247e0daa0787d296aa833570e1a542082a7a698bb499dc34f16fa4be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e593ff2a6412d8dfd3cd96e456f4fe9e2f8b04302d5b9036b828a3cf480b573\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11e2aa9eaa941ade1982256194422becbe3f375508cd507f603a822b10e03134\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:09Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:09 crc kubenswrapper[4813]: I1125 10:32:09.971610 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03303956e8d88df49c9c142a7074fa39272a78ea67e868b302d3a663d7f7178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:09Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:10 crc kubenswrapper[4813]: I1125 10:32:10.301396 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 10:32:10 crc kubenswrapper[4813]: I1125 10:32:10.303503 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:10 crc kubenswrapper[4813]: I1125 10:32:10.303544 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:10 crc kubenswrapper[4813]: I1125 10:32:10.303557 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:10 crc kubenswrapper[4813]: I1125 10:32:10.303710 4813 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 25 10:32:10 crc kubenswrapper[4813]: I1125 10:32:10.309987 4813 kubelet_node_status.go:115] "Node was previously registered" node="crc" Nov 25 10:32:10 crc kubenswrapper[4813]: I1125 10:32:10.310289 4813 kubelet_node_status.go:79] "Successfully registered node" node="crc" Nov 25 10:32:10 crc kubenswrapper[4813]: I1125 10:32:10.311720 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:10 crc kubenswrapper[4813]: I1125 10:32:10.311759 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:10 crc kubenswrapper[4813]: I1125 10:32:10.311769 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:10 crc kubenswrapper[4813]: I1125 10:32:10.311785 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:10 crc kubenswrapper[4813]: I1125 10:32:10.311797 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:10Z","lastTransitionTime":"2025-11-25T10:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:10 crc kubenswrapper[4813]: E1125 10:32:10.331139 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b8f6803-8c92-44d2-bc35-374b0f00608e\\\",\\\"systemUUID\\\":\\\"85f815b0-dc24-49ca-a7fb-6bc8e198cbb1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:10Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:10 crc kubenswrapper[4813]: I1125 10:32:10.334973 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:10 crc kubenswrapper[4813]: I1125 10:32:10.335026 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 10:32:10 crc kubenswrapper[4813]: I1125 10:32:10.335036 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:10 crc kubenswrapper[4813]: I1125 10:32:10.335049 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:10 crc kubenswrapper[4813]: I1125 10:32:10.335058 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:10Z","lastTransitionTime":"2025-11-25T10:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:10 crc kubenswrapper[4813]: E1125 10:32:10.349880 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b8f6803-8c92-44d2-bc35-374b0f00608e\\\",\\\"systemUUID\\\":\\\"85f815b0-dc24-49ca-a7fb-6bc8e198cbb1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:10Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:10 crc kubenswrapper[4813]: I1125 10:32:10.354226 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:10 crc kubenswrapper[4813]: I1125 10:32:10.354262 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 10:32:10 crc kubenswrapper[4813]: I1125 10:32:10.354273 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:10 crc kubenswrapper[4813]: I1125 10:32:10.354289 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:10 crc kubenswrapper[4813]: I1125 10:32:10.354301 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:10Z","lastTransitionTime":"2025-11-25T10:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:10 crc kubenswrapper[4813]: E1125 10:32:10.368227 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b8f6803-8c92-44d2-bc35-374b0f00608e\\\",\\\"systemUUID\\\":\\\"85f815b0-dc24-49ca-a7fb-6bc8e198cbb1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:10Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:10 crc kubenswrapper[4813]: I1125 10:32:10.373154 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:10 crc kubenswrapper[4813]: I1125 10:32:10.373210 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 10:32:10 crc kubenswrapper[4813]: I1125 10:32:10.373226 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:10 crc kubenswrapper[4813]: I1125 10:32:10.373252 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:10 crc kubenswrapper[4813]: I1125 10:32:10.373265 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:10Z","lastTransitionTime":"2025-11-25T10:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:10 crc kubenswrapper[4813]: E1125 10:32:10.387838 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b8f6803-8c92-44d2-bc35-374b0f00608e\\\",\\\"systemUUID\\\":\\\"85f815b0-dc24-49ca-a7fb-6bc8e198cbb1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:10Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:10 crc kubenswrapper[4813]: I1125 10:32:10.391935 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:10 crc kubenswrapper[4813]: I1125 10:32:10.391987 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 10:32:10 crc kubenswrapper[4813]: I1125 10:32:10.391999 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:10 crc kubenswrapper[4813]: I1125 10:32:10.392020 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:10 crc kubenswrapper[4813]: I1125 10:32:10.392033 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:10Z","lastTransitionTime":"2025-11-25T10:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:10 crc kubenswrapper[4813]: E1125 10:32:10.406187 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b8f6803-8c92-44d2-bc35-374b0f00608e\\\",\\\"systemUUID\\\":\\\"85f815b0-dc24-49ca-a7fb-6bc8e198cbb1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:10Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:10 crc kubenswrapper[4813]: E1125 10:32:10.406446 4813 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 10:32:10 crc kubenswrapper[4813]: I1125 10:32:10.408650 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 25 10:32:10 crc kubenswrapper[4813]: I1125 10:32:10.408750 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:10 crc kubenswrapper[4813]: I1125 10:32:10.408782 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:10 crc kubenswrapper[4813]: I1125 10:32:10.408809 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:10 crc kubenswrapper[4813]: I1125 10:32:10.408828 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:10Z","lastTransitionTime":"2025-11-25T10:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:10 crc kubenswrapper[4813]: I1125 10:32:10.511939 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:10 crc kubenswrapper[4813]: I1125 10:32:10.511993 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:10 crc kubenswrapper[4813]: I1125 10:32:10.512007 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:10 crc kubenswrapper[4813]: I1125 10:32:10.512028 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:10 crc kubenswrapper[4813]: I1125 10:32:10.512043 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:10Z","lastTransitionTime":"2025-11-25T10:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:10 crc kubenswrapper[4813]: I1125 10:32:10.614857 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:10 crc kubenswrapper[4813]: I1125 10:32:10.614898 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:10 crc kubenswrapper[4813]: I1125 10:32:10.614910 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:10 crc kubenswrapper[4813]: I1125 10:32:10.614928 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:10 crc kubenswrapper[4813]: I1125 10:32:10.614972 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:10Z","lastTransitionTime":"2025-11-25T10:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:10 crc kubenswrapper[4813]: I1125 10:32:10.621450 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 10:32:10 crc kubenswrapper[4813]: I1125 10:32:10.621463 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 10:32:10 crc kubenswrapper[4813]: E1125 10:32:10.621731 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 10:32:10 crc kubenswrapper[4813]: E1125 10:32:10.622266 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 10:32:10 crc kubenswrapper[4813]: I1125 10:32:10.717591 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:10 crc kubenswrapper[4813]: I1125 10:32:10.717642 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:10 crc kubenswrapper[4813]: I1125 10:32:10.717656 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:10 crc kubenswrapper[4813]: I1125 10:32:10.717709 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:10 crc kubenswrapper[4813]: I1125 10:32:10.717726 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:10Z","lastTransitionTime":"2025-11-25T10:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:10 crc kubenswrapper[4813]: I1125 10:32:10.785550 4813 generic.go:334] "Generic (PLEG): container finished" podID="a2ac9045-f02f-4149-afa5-61da1452d547" containerID="00af788f1e52f5e8adb3f20e61f5fbcfd1090e97a1f24d4ebe926dad23155ae5" exitCode=0 Nov 25 10:32:10 crc kubenswrapper[4813]: I1125 10:32:10.785595 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4s9w7" event={"ID":"a2ac9045-f02f-4149-afa5-61da1452d547","Type":"ContainerDied","Data":"00af788f1e52f5e8adb3f20e61f5fbcfd1090e97a1f24d4ebe926dad23155ae5"} Nov 25 10:32:10 crc kubenswrapper[4813]: I1125 10:32:10.804657 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86379c39-b839-4552-949c-35431188a3a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf4d6feac8fd516ce2d5e2ec13519c2bbd2d152cffe7c434fe2c4b478e8c9a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80f2017cddd8c12997b1818074df5aa37a902dca43c4b60dda58080e1887f8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225dc69c294a0063eda858d71902e848fb59d4595c25bfeecdf8dfb60fdcd6f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cbb3888ff07d07784e188a0b7b49e0f5b421cfaeb61924a0a46094fb3795b32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e393f04b541e0fc8c686b42396605529aa65fdaaf6602dd7c64a322a5071d643\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T10:31:57Z\\\",\\\"message\\\":\\\"W1125 10:31:46.900040 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1125 10:31:46.900557 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764066706 cert, and key in /tmp/serving-cert-1749499007/serving-signer.crt, /tmp/serving-cert-1749499007/serving-signer.key\\\\nI1125 10:31:47.317086 1 observer_polling.go:159] Starting file observer\\\\nW1125 10:31:47.321027 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 10:31:47.321219 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 10:31:47.325062 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1749499007/tls.crt::/tmp/serving-cert-1749499007/tls.key\\\\\\\"\\\\nF1125 10:31:57.761534 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46e1b456988c700012c86fac792b65d2e7c9a049057d5a17efbf600418191910\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3af1cb4a9a556e116874a9b7f7cb75a99db83b991b65280b6b2a96233ca0a85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3af1cb4a9a556e116874a9b7f7cb75a99db83b991b65280b6b2a96233ca0a85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:31:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:10Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:10 crc kubenswrapper[4813]: I1125 10:32:10.820632 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mmh87" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bcb41f8-67f5-4a87-8b49-07da054e0c81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fbf69eb2f0afb160e40675e9a17e8a9798a3f02de6a2f3aae7a30ef989e5479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xtc7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mmh87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:10Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:10 crc kubenswrapper[4813]: I1125 10:32:10.822252 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:10 crc kubenswrapper[4813]: I1125 10:32:10.822292 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:10 crc kubenswrapper[4813]: I1125 10:32:10.822302 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:10 crc kubenswrapper[4813]: I1125 10:32:10.822317 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:10 crc kubenswrapper[4813]: I1125 10:32:10.822328 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:10Z","lastTransitionTime":"2025-11-25T10:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:10 crc kubenswrapper[4813]: I1125 10:32:10.837895 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ece7e9c-d49a-4348-98ec-bd6ab589f750\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b85e2f2d2a870b205f19402a20540fa67104d12d2fcd412ada24c78b0602f2ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j55j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c16599a2b18976267f55176085b4b11e3e253e308707081d06d28d64f4dbb627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j55j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-knhz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:10Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:10 crc kubenswrapper[4813]: I1125 10:32:10.850828 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00ebb057ca6152197fa76fc78787533ab8ddaa1e1a096c624e3efc5fcf091332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616fae5157b8d51f903f870d19e7ed40447c3eb954b0e1bd0b3323c27deb59f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:10Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:10 crc kubenswrapper[4813]: I1125 
10:32:10.871306 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adac7b8b6297f077adc2d0e402547d19845a4b66a1279e143ba89f014ccdbf15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:10Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:10 crc kubenswrapper[4813]: I1125 10:32:10.885079 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rlpbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98439068-3c89-4c1b-8bb8-8aa848ef0cd3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73be3b0cabd20c94bd5c69211038398effe8adbb93eda17dbb136f17fa5ba62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdxm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rlpbx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:10Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:10 crc kubenswrapper[4813]: I1125 10:32:10.896356 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qltmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7637b907-3ae7-4b15-a4b9-a0c2217384a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://713975d4e8de4e14484cbd711f5279ddce3acad00571bf052b0ed728bd1a0ccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qvsb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qltmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:10Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:10 crc kubenswrapper[4813]: I1125 10:32:10.910307 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4s9w7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2ac9045-f02f-4149-afa5-61da1452d547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://792d5ec80cac3667bf3ad534b473ae86eca391f49782cfc0938d789eefd24a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://792d5ec80cac3667bf3ad534b473ae86eca391f49782cfc0938d789eefd24a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2afd11e5128cad91161f49b1e5d6ac378dbd319773996dbe702bf678a45a4a91\\\",\\\"image\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2afd11e5128cad91161f49b1e5d6ac378dbd319773996dbe702bf678a45a4a91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00af788f1e52f5e8adb3f20e61f5fbcfd1090e97a1f24d4ebe926dad23155ae5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00af788f1e52f5e8adb3f20e61f5fbcfd1090e97a1f24d4ebe926dad23155ae5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4s9w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:10Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:10 crc kubenswrapper[4813]: I1125 10:32:10.926016 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:10 crc kubenswrapper[4813]: I1125 10:32:10.926049 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:10 crc kubenswrapper[4813]: I1125 10:32:10.926057 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:10 crc kubenswrapper[4813]: I1125 10:32:10.926070 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:10 crc kubenswrapper[4813]: I1125 10:32:10.926079 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:10Z","lastTransitionTime":"2025-11-25T10:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:10 crc kubenswrapper[4813]: I1125 10:32:10.928872 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8460ec76-ba89-4f8f-9055-d7274ab52d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6554bcb1ce7e97de39f99556fc4e3db63a583ea45bd87706a3
c7737a8bde4f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6554bcb1ce7e97de39f99556fc4e3db63a583ea45bd87706a3c7737a8bde4f5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8s5k7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:10Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:10 crc kubenswrapper[4813]: I1125 10:32:10.939559 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"061a2a52-878f-4543-8408-3a7b838f8881\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761ff3f6b4afa8edd4892d9fe727e977fb9700a8c7ab1c149c12bfa6431951c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf09669b247e0daa0787d296aa833570e1a542082a7a698bb499dc34f16fa4be\\\",\\\"image\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e593ff2a6412d8dfd3cd96e456f4fe9e2f8b04302d5b9036b828a3cf480b573\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11e2aa9eaa941ade1982256194422becbe3f375508cd507f603a822b10e03134\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:10Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:10 crc kubenswrapper[4813]: I1125 10:32:10.950594 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:10Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:10 crc kubenswrapper[4813]: I1125 10:32:10.964272 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:10Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:10 crc kubenswrapper[4813]: I1125 10:32:10.977903 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:10Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:10 crc kubenswrapper[4813]: I1125 10:32:10.989974 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03303956e8d88df49c9c142a7074fa39272a78ea67e868b302d3a663d7f7178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:10Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:11 crc kubenswrapper[4813]: I1125 10:32:11.027841 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 25 10:32:11 crc kubenswrapper[4813]: I1125 10:32:11.027872 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:11 crc kubenswrapper[4813]: I1125 10:32:11.027879 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:11 crc kubenswrapper[4813]: I1125 10:32:11.027895 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:11 crc kubenswrapper[4813]: I1125 10:32:11.027904 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:11Z","lastTransitionTime":"2025-11-25T10:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:11 crc kubenswrapper[4813]: I1125 10:32:11.131215 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:11 crc kubenswrapper[4813]: I1125 10:32:11.131262 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:11 crc kubenswrapper[4813]: I1125 10:32:11.131273 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:11 crc kubenswrapper[4813]: I1125 10:32:11.131289 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:11 crc kubenswrapper[4813]: I1125 10:32:11.131300 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:11Z","lastTransitionTime":"2025-11-25T10:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:11 crc kubenswrapper[4813]: I1125 10:32:11.233920 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:11 crc kubenswrapper[4813]: I1125 10:32:11.233967 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:11 crc kubenswrapper[4813]: I1125 10:32:11.233977 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:11 crc kubenswrapper[4813]: I1125 10:32:11.233992 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:11 crc kubenswrapper[4813]: I1125 10:32:11.234004 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:11Z","lastTransitionTime":"2025-11-25T10:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:11 crc kubenswrapper[4813]: I1125 10:32:11.336769 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:11 crc kubenswrapper[4813]: I1125 10:32:11.336812 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:11 crc kubenswrapper[4813]: I1125 10:32:11.336824 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:11 crc kubenswrapper[4813]: I1125 10:32:11.336843 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:11 crc kubenswrapper[4813]: I1125 10:32:11.336858 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:11Z","lastTransitionTime":"2025-11-25T10:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:11 crc kubenswrapper[4813]: I1125 10:32:11.439246 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:11 crc kubenswrapper[4813]: I1125 10:32:11.439287 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:11 crc kubenswrapper[4813]: I1125 10:32:11.439298 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:11 crc kubenswrapper[4813]: I1125 10:32:11.439312 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:11 crc kubenswrapper[4813]: I1125 10:32:11.439322 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:11Z","lastTransitionTime":"2025-11-25T10:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:11 crc kubenswrapper[4813]: I1125 10:32:11.542249 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:11 crc kubenswrapper[4813]: I1125 10:32:11.542291 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:11 crc kubenswrapper[4813]: I1125 10:32:11.542302 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:11 crc kubenswrapper[4813]: I1125 10:32:11.542318 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:11 crc kubenswrapper[4813]: I1125 10:32:11.542330 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:11Z","lastTransitionTime":"2025-11-25T10:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:11 crc kubenswrapper[4813]: I1125 10:32:11.589701 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 10:32:11 crc kubenswrapper[4813]: E1125 10:32:11.589856 4813 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 10:32:11 crc kubenswrapper[4813]: E1125 10:32:11.589950 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 10:32:19.589926321 +0000 UTC m=+36.719636227 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 10:32:11 crc kubenswrapper[4813]: I1125 10:32:11.620695 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 10:32:11 crc kubenswrapper[4813]: E1125 10:32:11.620833 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 10:32:11 crc kubenswrapper[4813]: I1125 10:32:11.644298 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:11 crc kubenswrapper[4813]: I1125 10:32:11.644333 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:11 crc kubenswrapper[4813]: I1125 10:32:11.644343 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:11 crc kubenswrapper[4813]: I1125 10:32:11.644367 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:11 crc kubenswrapper[4813]: I1125 10:32:11.644379 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:11Z","lastTransitionTime":"2025-11-25T10:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:11 crc kubenswrapper[4813]: I1125 10:32:11.691139 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:32:11 crc kubenswrapper[4813]: I1125 10:32:11.691300 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 10:32:11 crc kubenswrapper[4813]: E1125 10:32:11.691323 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:32:19.691300466 +0000 UTC m=+36.821010352 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:32:11 crc kubenswrapper[4813]: I1125 10:32:11.691365 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 10:32:11 crc kubenswrapper[4813]: I1125 10:32:11.691400 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 10:32:11 crc kubenswrapper[4813]: E1125 10:32:11.691409 4813 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 10:32:11 crc kubenswrapper[4813]: E1125 10:32:11.691488 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 10:32:19.69146446 +0000 UTC m=+36.821174406 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 10:32:11 crc kubenswrapper[4813]: E1125 10:32:11.691525 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 10:32:11 crc kubenswrapper[4813]: E1125 10:32:11.691547 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 10:32:11 crc kubenswrapper[4813]: E1125 10:32:11.691574 4813 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 10:32:11 crc kubenswrapper[4813]: E1125 10:32:11.691612 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-25 10:32:19.691601054 +0000 UTC m=+36.821310940 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 10:32:11 crc kubenswrapper[4813]: E1125 10:32:11.691613 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 10:32:11 crc kubenswrapper[4813]: E1125 10:32:11.691650 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 10:32:11 crc kubenswrapper[4813]: E1125 10:32:11.691666 4813 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 10:32:11 crc kubenswrapper[4813]: E1125 10:32:11.691777 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-25 10:32:19.691755178 +0000 UTC m=+36.821465074 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 10:32:11 crc kubenswrapper[4813]: I1125 10:32:11.746550 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:11 crc kubenswrapper[4813]: I1125 10:32:11.746588 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:11 crc kubenswrapper[4813]: I1125 10:32:11.746599 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:11 crc kubenswrapper[4813]: I1125 10:32:11.746616 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:11 crc kubenswrapper[4813]: I1125 10:32:11.746629 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:11Z","lastTransitionTime":"2025-11-25T10:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:11 crc kubenswrapper[4813]: I1125 10:32:11.791708 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" event={"ID":"8460ec76-ba89-4f8f-9055-d7274ab52d11","Type":"ContainerStarted","Data":"32898e756d7697bcb5b6ae6780b7b752be67b44b9ce8c2f2459477c7f0b0a28d"} Nov 25 10:32:11 crc kubenswrapper[4813]: I1125 10:32:11.794357 4813 generic.go:334] "Generic (PLEG): container finished" podID="a2ac9045-f02f-4149-afa5-61da1452d547" containerID="156bff53f3008351c3f76a0cc5e9c3eeb4f19a7201392d095bc62012791d9fa5" exitCode=0 Nov 25 10:32:11 crc kubenswrapper[4813]: I1125 10:32:11.794403 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4s9w7" event={"ID":"a2ac9045-f02f-4149-afa5-61da1452d547","Type":"ContainerDied","Data":"156bff53f3008351c3f76a0cc5e9c3eeb4f19a7201392d095bc62012791d9fa5"} Nov 25 10:32:11 crc kubenswrapper[4813]: I1125 10:32:11.808328 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rlpbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98439068-3c89-4c1b-8bb8-8aa848ef0cd3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73be3b0cabd20c94bd5c69211038398effe8adbb93eda17dbb136f17fa5ba62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdxm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rlpbx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:11Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:11 crc kubenswrapper[4813]: I1125 10:32:11.820181 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qltmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7637b907-3ae7-4b15-a4b9-a0c2217384a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://713975d4e8de4e14484cbd711f5279ddce3acad00571bf052b0ed728bd1a0ccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qvsb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qltmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:11Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:11 crc kubenswrapper[4813]: I1125 10:32:11.835761 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00ebb057ca6152197fa76fc78787533ab8ddaa1e1a096c624e3efc5fcf091332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616fae5157b8d51f903f870d19e7ed40447c3eb954b0e1bd0b3323c27deb59f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:11Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:11 crc kubenswrapper[4813]: I1125 10:32:11.852247 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:11 crc kubenswrapper[4813]: I1125 10:32:11.852294 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:11 crc kubenswrapper[4813]: I1125 10:32:11.852306 4813 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 25 10:32:11 crc kubenswrapper[4813]: I1125 10:32:11.852322 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:11 crc kubenswrapper[4813]: I1125 10:32:11.852334 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:11Z","lastTransitionTime":"2025-11-25T10:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:11 crc kubenswrapper[4813]: I1125 10:32:11.859091 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adac7b8b6297f077adc2d0e402547d19845a4b66a1279e143ba89f014ccdbf15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:11Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:11 crc kubenswrapper[4813]: I1125 10:32:11.874968 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:11Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:11 crc kubenswrapper[4813]: I1125 10:32:11.888897 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:11Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:11 crc kubenswrapper[4813]: I1125 10:32:11.906486 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4s9w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2ac9045-f02f-4149-afa5-61da1452d547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://792d5ec80cac3667bf3ad534b473ae86eca391f49782cfc0938d789eefd24a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://792d5ec80cac3667bf3ad534b473ae86eca391f49782cfc0938d789eefd24a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2afd11e5128cad91161f49b1e5d6ac378dbd319773996dbe702bf678a45a4a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2afd11e5128cad91161f49b1e5d6ac378dbd319773996dbe702bf678a45a4a91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00af788f1e52f5e8adb3f20e61f5fbcfd1090e97a1f24d4ebe926dad23155ae5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00af788f1e52f5e8adb3f20e61f5fbcfd1090e97a1f24d4ebe926dad23155ae5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://156bff53f3008351c3f76a0cc5e9c3eeb4f19a7201392d095bc62012791d9fa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://156bff53f3008351c3f76a0cc5e9c3eeb4f19a7201392d095bc62012791d9fa5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4s9w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:11Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:11 crc kubenswrapper[4813]: I1125 10:32:11.924661 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8460ec76-ba89-4f8f-9055-d7274ab52d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6554bcb1ce7e97de39f99556fc4e3db63a583ea45bd87706a3c7737a8bde4f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6554bcb1ce7e97de39f99556fc4e3db63a583ea45bd87706a3c7737a8bde4f5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8s5k7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:11Z 
is after 2025-08-24T17:21:41Z" Nov 25 10:32:11 crc kubenswrapper[4813]: I1125 10:32:11.937566 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"061a2a52-878f-4543-8408-3a7b838f8881\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761ff3f6b4afa8edd4892d9fe727e977fb9700a8c7ab1c149c12bfa6431951c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf09669b247e0daa0787d296aa833570e1a542082a7a698bb499dc34f16fa4be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e593ff2a6412d8dfd3cd96e456f4fe9e2f8b04302d5b9036b828a3cf480b573\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11e2aa9eaa941ade1982256194422becbe3f375508cd507f603a822b10e03134\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:11Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:11 crc kubenswrapper[4813]: I1125 10:32:11.948760 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:11Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:11 crc kubenswrapper[4813]: I1125 10:32:11.956517 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:11 crc kubenswrapper[4813]: I1125 10:32:11.956544 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:11 crc kubenswrapper[4813]: I1125 10:32:11.956554 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:11 crc kubenswrapper[4813]: I1125 10:32:11.956568 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:11 crc kubenswrapper[4813]: I1125 10:32:11.956580 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:11Z","lastTransitionTime":"2025-11-25T10:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:11 crc kubenswrapper[4813]: I1125 10:32:11.961525 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03303956e8d88df49c9c142a7074fa39272a78ea67e868b302d3a663d7f7178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:11Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:11 crc kubenswrapper[4813]: I1125 10:32:11.975311 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86379c39-b839-4552-949c-35431188a3a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf4d6feac8fd516ce2d5e2ec13519c2bbd2d152cffe7c434fe2c4b478e8c9a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80f2017cddd8c12997b1818074df5aa37a902dca43c4b60dda58080e1887f8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225dc69c294a0063eda858d71902e848fb59d4595c25bfeecdf8dfb60fdcd6f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cbb3888ff07d07784e188a0b7b49e0f5b421cfaeb61924a0a46094fb3795b32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e393f04b541e0fc8c686b42396605529aa65fdaaf6602dd7c64a322a5071d643\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T10:31:57Z\\\",\\\"message\\\":\\\"W1125 10:31:46.900040 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1125 
10:31:46.900557 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764066706 cert, and key in /tmp/serving-cert-1749499007/serving-signer.crt, /tmp/serving-cert-1749499007/serving-signer.key\\\\nI1125 10:31:47.317086 1 observer_polling.go:159] Starting file observer\\\\nW1125 10:31:47.321027 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 10:31:47.321219 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 10:31:47.325062 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1749499007/tls.crt::/tmp/serving-cert-1749499007/tls.key\\\\\\\"\\\\nF1125 10:31:57.761534 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46e1b456988c700012c86fac792b65d2e7c9a049057d5a17efbf600418191910\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3af1cb4a9a556e116874a9b7f7cb75a99db83b991b65280b6b2a96233ca0a85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3af1cb4a9a556e116874a9b7f7cb75a99db83b991b65280b6b2a96233ca0a85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:31:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:11Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:11 crc kubenswrapper[4813]: I1125 10:32:11.984438 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mmh87" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bcb41f8-67f5-4a87-8b49-07da054e0c81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fbf69eb2f0afb160e40675e9a17e8a9798a3f02de6a2f3aae7a30ef989e5479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xtc7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mmh87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:11Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:11 crc kubenswrapper[4813]: I1125 10:32:11.996112 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ece7e9c-d49a-4348-98ec-bd6ab589f750\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b85e2f2d2a870b205f19402a20540fa67104d12d2fcd412ada24c78b0602f2ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j55j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c16599a2b18976267f55176085b4b11e3e253e308707081d06d28d64f4dbb627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j55j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-knhz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:11Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:12 crc kubenswrapper[4813]: I1125 10:32:12.058879 4813 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:12 crc kubenswrapper[4813]: I1125 10:32:12.059217 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:12 crc kubenswrapper[4813]: I1125 10:32:12.059227 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:12 crc kubenswrapper[4813]: I1125 10:32:12.059241 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:12 crc kubenswrapper[4813]: I1125 10:32:12.059252 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:12Z","lastTransitionTime":"2025-11-25T10:32:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:12 crc kubenswrapper[4813]: I1125 10:32:12.162091 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:12 crc kubenswrapper[4813]: I1125 10:32:12.162128 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:12 crc kubenswrapper[4813]: I1125 10:32:12.162138 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:12 crc kubenswrapper[4813]: I1125 10:32:12.162154 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:12 crc kubenswrapper[4813]: I1125 10:32:12.162167 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:12Z","lastTransitionTime":"2025-11-25T10:32:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:12 crc kubenswrapper[4813]: I1125 10:32:12.265995 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:12 crc kubenswrapper[4813]: I1125 10:32:12.266060 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:12 crc kubenswrapper[4813]: I1125 10:32:12.266070 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:12 crc kubenswrapper[4813]: I1125 10:32:12.266087 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:12 crc kubenswrapper[4813]: I1125 10:32:12.266098 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:12Z","lastTransitionTime":"2025-11-25T10:32:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:12 crc kubenswrapper[4813]: I1125 10:32:12.368604 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:12 crc kubenswrapper[4813]: I1125 10:32:12.368642 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:12 crc kubenswrapper[4813]: I1125 10:32:12.368650 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:12 crc kubenswrapper[4813]: I1125 10:32:12.368666 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:12 crc kubenswrapper[4813]: I1125 10:32:12.368676 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:12Z","lastTransitionTime":"2025-11-25T10:32:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:12 crc kubenswrapper[4813]: I1125 10:32:12.471563 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:12 crc kubenswrapper[4813]: I1125 10:32:12.471604 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:12 crc kubenswrapper[4813]: I1125 10:32:12.471614 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:12 crc kubenswrapper[4813]: I1125 10:32:12.471629 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:12 crc kubenswrapper[4813]: I1125 10:32:12.471640 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:12Z","lastTransitionTime":"2025-11-25T10:32:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:12 crc kubenswrapper[4813]: I1125 10:32:12.574103 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:12 crc kubenswrapper[4813]: I1125 10:32:12.574147 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:12 crc kubenswrapper[4813]: I1125 10:32:12.574155 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:12 crc kubenswrapper[4813]: I1125 10:32:12.574171 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:12 crc kubenswrapper[4813]: I1125 10:32:12.574180 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:12Z","lastTransitionTime":"2025-11-25T10:32:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:12 crc kubenswrapper[4813]: I1125 10:32:12.621117 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 10:32:12 crc kubenswrapper[4813]: I1125 10:32:12.621128 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 10:32:12 crc kubenswrapper[4813]: E1125 10:32:12.621316 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 10:32:12 crc kubenswrapper[4813]: E1125 10:32:12.621495 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 10:32:12 crc kubenswrapper[4813]: I1125 10:32:12.677144 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:12 crc kubenswrapper[4813]: I1125 10:32:12.677211 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:12 crc kubenswrapper[4813]: I1125 10:32:12.677230 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:12 crc kubenswrapper[4813]: I1125 10:32:12.677253 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:12 crc kubenswrapper[4813]: I1125 10:32:12.677270 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:12Z","lastTransitionTime":"2025-11-25T10:32:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:12 crc kubenswrapper[4813]: I1125 10:32:12.780014 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:12 crc kubenswrapper[4813]: I1125 10:32:12.780050 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:12 crc kubenswrapper[4813]: I1125 10:32:12.780059 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:12 crc kubenswrapper[4813]: I1125 10:32:12.780072 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:12 crc kubenswrapper[4813]: I1125 10:32:12.780081 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:12Z","lastTransitionTime":"2025-11-25T10:32:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:12 crc kubenswrapper[4813]: I1125 10:32:12.801592 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4s9w7" event={"ID":"a2ac9045-f02f-4149-afa5-61da1452d547","Type":"ContainerStarted","Data":"8a98899b475454bf9249b6437439cb15a56278a71678cd2c7a430b4c14ef4022"} Nov 25 10:32:12 crc kubenswrapper[4813]: I1125 10:32:12.815230 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03303956e8d88df49c9c142a7074fa39272a78ea67e868b302d3a663d7f7178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:12Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:12 crc kubenswrapper[4813]: I1125 10:32:12.827766 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86379c39-b839-4552-949c-35431188a3a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf4d6feac8fd516ce2d5e2ec13519c2bbd2d152cffe7c434fe2c4b478e8c9a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80f2017cddd8c12997b1818074df5aa37a902dca43c4b60dda58080e1887f8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225dc69c294a0063eda858d71902e848fb59d4595c25bfeecdf8dfb60fdcd6f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-api
server-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cbb3888ff07d07784e188a0b7b49e0f5b421cfaeb61924a0a46094fb3795b32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e393f04b541e0fc8c686b42396605529aa65fdaaf6602dd7c64a322a5071d643\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T10:31:57Z\\\",\\\"message\\\":\\\"W1125 10:31:46.900040 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1125 10:31:46.900557 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764066706 cert, and key in /tmp/serving-cert-1749499007/serving-signer.crt, /tmp/serving-cert-1749499007/serving-signer.key\\\\nI1125 10:31:47.317086 1 observer_polling.go:159] Starting file observer\\\\nW1125 10:31:47.321027 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 10:31:47.321219 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 10:31:47.325062 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1749499007/tls.crt::/tmp/serving-cert-1749499007/tls.key\\\\\\\"\\\\nF1125 10:31:57.761534 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46e1b456988c700012c86fac792b65d2e7c9a049057d5a17efbf600418191910\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3af1cb4a9a556e116874a9b7f7cb75a99db83b991b65280b6b2a96233ca0a85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3af1cb4a9a556e116874a9b7f7cb75a99db83b991b65280b6b2a96233ca0a85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:31:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:12Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:12 crc kubenswrapper[4813]: I1125 10:32:12.839501 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mmh87" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bcb41f8-67f5-4a87-8b49-07da054e0c81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fbf69eb2f0afb160e40675e9a17e8a9798a3f02de6a2f3aae7a30ef989e5479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xtc7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mmh87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:12Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:12 crc kubenswrapper[4813]: I1125 10:32:12.854298 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ece7e9c-d49a-4348-98ec-bd6ab589f750\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b85e2f2d2a870b205f19402a20540fa67104d12d2fcd412ada24c78b0602f2ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j55j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c16599a2b18976267f55176085b4b11e3e253e308707081d06d28d64f4dbb627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j55j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-knhz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:12Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:12 crc kubenswrapper[4813]: I1125 10:32:12.867360 4813 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00ebb057ca6152197fa76fc78787533ab8ddaa1e1a096c624e3efc5fcf091332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616fae5157b8d51f903f870d19e7ed40447c3eb954b0e1bd0b3323c27deb59f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:12Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:12 crc kubenswrapper[4813]: I1125 10:32:12.877257 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adac7b8b6297f077adc2d0e402547d19845a4b66a1279e143ba89f014ccdbf15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:12Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:12 crc kubenswrapper[4813]: I1125 10:32:12.881971 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:12 crc kubenswrapper[4813]: I1125 10:32:12.882006 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:12 crc kubenswrapper[4813]: I1125 10:32:12.882015 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:12 crc kubenswrapper[4813]: I1125 10:32:12.882029 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:12 crc kubenswrapper[4813]: I1125 10:32:12.882039 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:12Z","lastTransitionTime":"2025-11-25T10:32:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:12 crc kubenswrapper[4813]: I1125 10:32:12.888608 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rlpbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98439068-3c89-4c1b-8bb8-8aa848ef0cd3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73be3b0cabd20c94bd5c69211038398effe8adbb93eda17dbb136f17fa5ba62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdxm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rlpbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:12Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:12 crc kubenswrapper[4813]: I1125 10:32:12.897628 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qltmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7637b907-3ae7-4b15-a4b9-a0c2217384a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://713975d4e8de4e14484cbd711f5279ddce3acad00571bf052b0ed728bd1a0ccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qvsb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qltmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:12Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:12 crc kubenswrapper[4813]: I1125 10:32:12.910664 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"061a2a52-878f-4543-8408-3a7b838f8881\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761ff3f6b4afa8edd4892d9fe727e977fb9700a8c7ab1c149c12bfa6431951c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf09669b247e0daa0787d296aa833570e1a542082a7a698bb499dc34f16fa4be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e593ff2a6412d8dfd3cd96e456f4fe9e2f8b04302d5b9036b828a3cf480b573\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11e2aa9eaa941ade1982256194422becbe3f375508cd507f603a822b10e03134\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:12Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:12 crc kubenswrapper[4813]: I1125 10:32:12.922759 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:12Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:12 crc kubenswrapper[4813]: I1125 10:32:12.934866 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:12Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:12 crc kubenswrapper[4813]: I1125 10:32:12.946327 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:12Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:12 crc kubenswrapper[4813]: I1125 10:32:12.960136 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4s9w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2ac9045-f02f-4149-afa5-61da1452d547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://792d5ec80cac3667bf3ad534b473ae86eca391f49782cfc0938d789eefd24a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://792d5ec80cac3667bf3ad534b473ae86eca391f49782cfc0938d789eefd24a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2afd11e5128cad91161f49b1e5d6ac378dbd319773996dbe702bf678a45a4a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2afd11e5128cad91161f49b1e5d6ac378dbd319773996dbe702bf678a45a4a91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00af788f1e52f5e8adb3f20e61f5fbcfd1090e97a1f24d4ebe926dad23155ae5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00af788f1e52f5e8adb3f20e61f5fbcfd1090e97a1f24d4ebe926dad23155ae5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://156bff53f3008351c3f76a0cc5e9c3eeb4f19a7201392d095bc62012791d9fa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://156bff53f3008351c3f76a0cc5e9c3eeb4f19a7201392d095bc62012791d9fa5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a98899b475454bf9249b6437439cb15a56278a71678cd2c7a430b4c14ef4022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4s9w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:12Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:12 crc kubenswrapper[4813]: I1125 10:32:12.979165 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8460ec76-ba89-4f8f-9055-d7274ab52d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6554bcb1ce7e97de39f99556fc4e3db63a583ea45bd87706a3c7737a8bde4f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6554bcb1ce7e97de39f99556fc4e3db63a583ea45bd87706a3c7737a8bde4f5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8s5k7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:12Z 
is after 2025-08-24T17:21:41Z" Nov 25 10:32:12 crc kubenswrapper[4813]: I1125 10:32:12.985011 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:12 crc kubenswrapper[4813]: I1125 10:32:12.985054 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:12 crc kubenswrapper[4813]: I1125 10:32:12.985064 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:12 crc kubenswrapper[4813]: I1125 10:32:12.985078 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:12 crc kubenswrapper[4813]: I1125 10:32:12.985089 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:12Z","lastTransitionTime":"2025-11-25T10:32:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:13 crc kubenswrapper[4813]: I1125 10:32:13.088104 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:13 crc kubenswrapper[4813]: I1125 10:32:13.088156 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:13 crc kubenswrapper[4813]: I1125 10:32:13.088169 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:13 crc kubenswrapper[4813]: I1125 10:32:13.088186 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:13 crc kubenswrapper[4813]: I1125 10:32:13.088204 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:13Z","lastTransitionTime":"2025-11-25T10:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:13 crc kubenswrapper[4813]: I1125 10:32:13.191482 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:13 crc kubenswrapper[4813]: I1125 10:32:13.191601 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:13 crc kubenswrapper[4813]: I1125 10:32:13.191623 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:13 crc kubenswrapper[4813]: I1125 10:32:13.191718 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:13 crc kubenswrapper[4813]: I1125 10:32:13.191763 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:13Z","lastTransitionTime":"2025-11-25T10:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:13 crc kubenswrapper[4813]: I1125 10:32:13.295624 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:13 crc kubenswrapper[4813]: I1125 10:32:13.295753 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:13 crc kubenswrapper[4813]: I1125 10:32:13.295771 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:13 crc kubenswrapper[4813]: I1125 10:32:13.295793 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:13 crc kubenswrapper[4813]: I1125 10:32:13.295804 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:13Z","lastTransitionTime":"2025-11-25T10:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:13 crc kubenswrapper[4813]: I1125 10:32:13.398435 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:13 crc kubenswrapper[4813]: I1125 10:32:13.398481 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:13 crc kubenswrapper[4813]: I1125 10:32:13.398495 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:13 crc kubenswrapper[4813]: I1125 10:32:13.398509 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:13 crc kubenswrapper[4813]: I1125 10:32:13.398522 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:13Z","lastTransitionTime":"2025-11-25T10:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:13 crc kubenswrapper[4813]: I1125 10:32:13.500324 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:13 crc kubenswrapper[4813]: I1125 10:32:13.500366 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:13 crc kubenswrapper[4813]: I1125 10:32:13.500378 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:13 crc kubenswrapper[4813]: I1125 10:32:13.500393 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:13 crc kubenswrapper[4813]: I1125 10:32:13.500404 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:13Z","lastTransitionTime":"2025-11-25T10:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:13 crc kubenswrapper[4813]: I1125 10:32:13.603236 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:13 crc kubenswrapper[4813]: I1125 10:32:13.603279 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:13 crc kubenswrapper[4813]: I1125 10:32:13.603289 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:13 crc kubenswrapper[4813]: I1125 10:32:13.603302 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:13 crc kubenswrapper[4813]: I1125 10:32:13.603311 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:13Z","lastTransitionTime":"2025-11-25T10:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:13 crc kubenswrapper[4813]: I1125 10:32:13.621546 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 10:32:13 crc kubenswrapper[4813]: E1125 10:32:13.621707 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 10:32:13 crc kubenswrapper[4813]: I1125 10:32:13.639441 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00ebb057ca6152197fa76fc78787533ab8ddaa1e1a096c624e3efc5fcf091332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616fae5157b8d51f903f870d19e7ed40447c3eb954b0e1bd0b3323c27deb59f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:13Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:13 crc kubenswrapper[4813]: I1125 10:32:13.654367 4813 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adac7b8b6297f077adc2d0e402547d19845a4b66a1279e143ba89f014ccdbf15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:13Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:13 crc kubenswrapper[4813]: I1125 10:32:13.672559 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rlpbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98439068-3c89-4c1b-8bb8-8aa848ef0cd3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73be3b0cabd20c94bd5c69211038398effe8adbb93eda17dbb136f17fa5ba62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdxm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rlpbx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:13Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:13 crc kubenswrapper[4813]: I1125 10:32:13.687891 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qltmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7637b907-3ae7-4b15-a4b9-a0c2217384a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://713975d4e8de4e14484cbd711f5279ddce3acad00571bf052b0ed728bd1a0ccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qvsb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qltmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:13Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:13 crc kubenswrapper[4813]: I1125 10:32:13.704956 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:13 crc kubenswrapper[4813]: I1125 10:32:13.704999 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:13 crc kubenswrapper[4813]: I1125 10:32:13.705010 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:13 crc 
kubenswrapper[4813]: I1125 10:32:13.705026 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:13 crc kubenswrapper[4813]: I1125 10:32:13.705036 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:13Z","lastTransitionTime":"2025-11-25T10:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:13 crc kubenswrapper[4813]: I1125 10:32:13.710190 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4s9w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2ac9045-f02f-4149-afa5-61da1452d547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://792d5ec80cac3667bf3ad534b473ae86eca391f49782cfc0938d789eefd24a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://792d5ec80cac3667bf3ad534b473ae86eca391f49782cfc0938d789eefd24a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2afd11e5128cad91161f49b1e5d6ac378dbd319773996dbe702bf678a45a4a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2afd11e5128cad91161f49b1e5d6ac378dbd319773996dbe702bf678a45a4a91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00af788f1e52f5e8adb3f20e61f5fbcfd1090e97a1f24d4ebe926dad23155ae5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00af788f1e52f5e8adb3f20e61f5fbcfd1090e97a1f24d4ebe926dad23155ae5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://156bff53f3008351c3f76a0cc5e9c3eeb4f19a7201392d095bc62012791d9fa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://156bff53f3008351c3f76a0cc5e9c3eeb4f19a7201392d095bc62012791d9fa5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a98899b475454bf9249b6437439cb15a56278a71678cd2c7a430b4c14ef4022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4s9w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:13Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:13 crc kubenswrapper[4813]: I1125 10:32:13.728431 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8460ec76-ba89-4f8f-9055-d7274ab52d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6554bcb1ce7e97de39f99556fc4e3db63a583ea45bd87706a3c7737a8bde4f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6554bcb1ce7e97de39f99556fc4e3db63a583ea45bd87706a3c7737a8bde4f5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8s5k7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:13Z 
is after 2025-08-24T17:21:41Z" Nov 25 10:32:13 crc kubenswrapper[4813]: I1125 10:32:13.743438 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"061a2a52-878f-4543-8408-3a7b838f8881\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761ff3f6b4afa8edd4892d9fe727e977fb9700a8c7ab1c149c12bfa6431951c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf09669b247e0daa0787d296aa833570e1a542082a7a698bb499dc34f16fa4be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e593ff2a6412d8dfd3cd96e456f4fe9e2f8b04302d5b9036b828a3cf480b573\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11e2aa9eaa941ade1982256194422becbe3f375508cd507f603a822b10e03134\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:13Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:13 crc kubenswrapper[4813]: I1125 10:32:13.754948 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:13Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:13 crc kubenswrapper[4813]: I1125 10:32:13.770246 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:13Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:13 crc kubenswrapper[4813]: I1125 10:32:13.785713 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:13Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:13 crc kubenswrapper[4813]: I1125 10:32:13.800411 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03303956e8d88df49c9c142a7074fa39272a78ea67e868b302d3a663d7f7178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:13Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:13 crc kubenswrapper[4813]: I1125 10:32:13.806938 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 25 10:32:13 crc kubenswrapper[4813]: I1125 10:32:13.807134 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:13 crc kubenswrapper[4813]: I1125 10:32:13.807221 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:13 crc kubenswrapper[4813]: I1125 10:32:13.807289 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:13 crc kubenswrapper[4813]: I1125 10:32:13.807354 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:13Z","lastTransitionTime":"2025-11-25T10:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:13 crc kubenswrapper[4813]: I1125 10:32:13.807441 4813 generic.go:334] "Generic (PLEG): container finished" podID="a2ac9045-f02f-4149-afa5-61da1452d547" containerID="8a98899b475454bf9249b6437439cb15a56278a71678cd2c7a430b4c14ef4022" exitCode=0 Nov 25 10:32:13 crc kubenswrapper[4813]: I1125 10:32:13.807495 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4s9w7" event={"ID":"a2ac9045-f02f-4149-afa5-61da1452d547","Type":"ContainerDied","Data":"8a98899b475454bf9249b6437439cb15a56278a71678cd2c7a430b4c14ef4022"} Nov 25 10:32:13 crc kubenswrapper[4813]: I1125 10:32:13.818016 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86379c39-b839-4552-949c-35431188a3a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf4d6feac8fd516ce2d5e2ec13519c2bbd2d152cffe7c434fe2c4b478e8c9a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80f2017cddd8c12997b1818074df5aa37a902dca43c4b60dda58080e1887f8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225dc69c294a0063eda858d71902e848fb59d4595c25bfeecdf8dfb60fdcd6f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cbb3888ff07d07784e188a0b7b49e0f5b421cfaeb61924a0a46094fb3795b32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e393f04b541e0fc8c686b42396605529aa65fdaaf6602dd7c64a322a5071d643\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T10:31:57Z\\\",\\\"message\\\":\\\"W1125 10:31:46.900040 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1125 
10:31:46.900557 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764066706 cert, and key in /tmp/serving-cert-1749499007/serving-signer.crt, /tmp/serving-cert-1749499007/serving-signer.key\\\\nI1125 10:31:47.317086 1 observer_polling.go:159] Starting file observer\\\\nW1125 10:31:47.321027 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 10:31:47.321219 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 10:31:47.325062 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1749499007/tls.crt::/tmp/serving-cert-1749499007/tls.key\\\\\\\"\\\\nF1125 10:31:57.761534 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46e1b456988c700012c86fac792b65d2e7c9a049057d5a17efbf600418191910\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3af1cb4a9a556e116874a9b7f7cb75a99db83b991b65280b6b2a96233ca0a85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3af1cb4a9a556e116874a9b7f7cb75a99db83b991b65280b6b2a96233ca0a85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:31:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:13Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:13 crc kubenswrapper[4813]: I1125 10:32:13.833181 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mmh87" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bcb41f8-67f5-4a87-8b49-07da054e0c81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fbf69eb2f0afb160e40675e9a17e8a9798a3f02de6a2f3aae7a30ef989e5479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xtc7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mmh87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:13Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:13 crc kubenswrapper[4813]: I1125 10:32:13.845078 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ece7e9c-d49a-4348-98ec-bd6ab589f750\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b85e2f2d2a870b205f19402a20540fa67104d12d2fcd412ada24c78b0602f2ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j55j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c16599a2b18976267f55176085b4b11e3e253e308707081d06d28d64f4dbb627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j55j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-knhz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:13Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:13 crc kubenswrapper[4813]: I1125 10:32:13.854843 4813 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ece7e9c-d49a-4348-98ec-bd6ab589f750\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b85e2f2d2a870b205f19402a20540fa67104d12d2fcd412ada24c78b0602f2ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j55j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c16599a2b18976267f55176085b4b11e3e253e308707081d06d28d64f4dbb627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j55j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-knhz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:13Z is after 
2025-08-24T17:21:41Z" Nov 25 10:32:13 crc kubenswrapper[4813]: I1125 10:32:13.867141 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86379c39-b839-4552-949c-35431188a3a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf4d6feac8fd516ce2d5e2ec13519c2bbd2d152cffe7c434fe2c4b478e8c9a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80f2017cddd8c12997b1818074df5aa37a902dca43c4b60dda58080e1887f8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225dc69c294a0063eda858d71902e848fb59d4595c25bfeecdf8dfb60fdcd6f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cbb3888ff07d07784e188a0b7b49e0f5b421cfaeb61924a0a46094fb3795b32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e393f04b541e0fc8c686b42396605529aa65fdaaf6602dd7c64a322a5071d643\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T10:31:57Z\\\",\\\"message\\\":\\\"W1125 10:31:46.900040 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1125 10:31:46.900557 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764066706 cert, and key in /tmp/serving-cert-1749499007/serving-signer.crt, /tmp/serving-cert-1749499007/serving-signer.key\\\\nI1125 10:31:47.317086 1 observer_polling.go:159] Starting file observer\\\\nW1125 10:31:47.321027 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 10:31:47.321219 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 10:31:47.325062 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1749499007/tls.crt::/tmp/serving-cert-1749499007/tls.key\\\\\\\"\\\\nF1125 10:31:57.761534 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46e1b456988c700012c86fac792b65d2e7c9a049057d5a17efbf600418191910\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3af1cb4a9a556e116874a9b7f7cb75a99db83b991b65280b6b2a96233ca0a85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3af1cb4a9a556e116874a9b7f7cb75a99db83b991b65280b6b2a96233ca0a85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:31:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:13Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:13 crc kubenswrapper[4813]: I1125 10:32:13.880003 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mmh87" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bcb41f8-67f5-4a87-8b49-07da054e0c81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fbf69eb2f0afb160e40675e9a17e8a9798a3f02de6a2f3aae7a30ef989e5479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xtc7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mmh87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:13Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:13 crc kubenswrapper[4813]: I1125 10:32:13.890158 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adac7b8b6297f077adc2d0e402547d19845a4b66a1279e143ba89f014ccdbf15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:13Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:13 crc kubenswrapper[4813]: I1125 10:32:13.902351 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rlpbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98439068-3c89-4c1b-8bb8-8aa848ef0cd3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73be3b0cabd20c94bd5c69211038398effe8adbb93eda17dbb136f17fa5ba62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdxm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rlpbx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:13Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:13 crc kubenswrapper[4813]: I1125 10:32:13.910043 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:13 crc kubenswrapper[4813]: I1125 10:32:13.910084 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:13 crc kubenswrapper[4813]: I1125 10:32:13.910094 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:13 crc kubenswrapper[4813]: I1125 10:32:13.910107 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:13 crc kubenswrapper[4813]: I1125 10:32:13.910116 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:13Z","lastTransitionTime":"2025-11-25T10:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:13 crc kubenswrapper[4813]: I1125 10:32:13.913273 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qltmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7637b907-3ae7-4b15-a4b9-a0c2217384a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://713975d4e8de4e14484cbd711f5279ddce3acad00571bf052b0ed728bd1a0ccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qvsb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"h
ostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qltmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:13Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:13 crc kubenswrapper[4813]: I1125 10:32:13.926895 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00ebb057ca6152197fa76fc78787533ab8ddaa1e1a096c624e3efc5fcf091332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616fae5157b8d51f903f870d19e7ed40447c3eb954b0e1bd0b3323c27deb59f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:13Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:13 crc kubenswrapper[4813]: I1125 10:32:13.940693 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:13Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:13 crc kubenswrapper[4813]: I1125 10:32:13.954808 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:13Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:13 crc kubenswrapper[4813]: I1125 10:32:13.966114 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:13Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:13 crc kubenswrapper[4813]: I1125 10:32:13.980910 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4s9w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2ac9045-f02f-4149-afa5-61da1452d547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://792d5ec80cac3667bf3ad534b473ae86eca391f49782cfc0938d789eefd24a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://792d5ec80cac3667bf3ad534b473ae86eca391f49782cfc0938d789eefd24a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2afd11e5128cad91161f49b1e5d6ac378dbd319773996dbe702bf678a45a4a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2afd11e5128cad91161f49b1e5d6ac378dbd319773996dbe702bf678a45a4a91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00af788f1e52f5e8adb3f20e61f5fbcfd1090e97a1f24d4ebe926dad23155ae5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00af788f1e52f5e8adb3f20e61f5fbcfd1090e97a1f24d4ebe926dad23155ae5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://156bff53f3008351c3f76a0cc5e9c3eeb4f19a7201392d095bc62012791d9fa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://156bff53f3008351c3f76a0cc5e9c3eeb4f19a7201392d095bc62012791d9fa5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a98899b475454bf9249b6437439cb15a56278a71678cd2c7a430b4c14ef4022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a98899b475454bf9249b6437439cb15a56278a71678cd2c7a430b4c14ef4022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4s9w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:13Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:13 crc kubenswrapper[4813]: I1125 10:32:13.997983 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8460ec76-ba89-4f8f-9055-d7274ab52d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6554bcb1ce7e97de39f99556fc4e3db63a583ea45bd87706a3c7737a8bde4f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6554bcb1ce7e97de39f99556fc4e3db63a583ea45bd87706a3c7737a8bde4f5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8s5k7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:13Z 
is after 2025-08-24T17:21:41Z" Nov 25 10:32:14 crc kubenswrapper[4813]: I1125 10:32:14.008865 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"061a2a52-878f-4543-8408-3a7b838f8881\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761ff3f6b4afa8edd4892d9fe727e977fb9700a8c7ab1c149c12bfa6431951c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf09669b247e0daa0787d296aa833570e1a542082a7a698bb499dc34f16fa4be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e593ff2a6412d8dfd3cd96e456f4fe9e2f8b04302d5b9036b828a3cf480b573\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11e2aa9eaa941ade1982256194422becbe3f375508cd507f603a822b10e03134\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:14Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:14 crc kubenswrapper[4813]: I1125 10:32:14.012479 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:14 crc kubenswrapper[4813]: I1125 10:32:14.012514 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:14 crc kubenswrapper[4813]: I1125 10:32:14.012535 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:14 crc kubenswrapper[4813]: I1125 10:32:14.012553 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:14 crc kubenswrapper[4813]: I1125 10:32:14.012567 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:14Z","lastTransitionTime":"2025-11-25T10:32:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:14 crc kubenswrapper[4813]: I1125 10:32:14.021592 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03303956e8d88df49c9c142a7074fa39272a78ea67e868b302d3a663d7f7178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:14Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:14 crc kubenswrapper[4813]: I1125 10:32:14.114495 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:14 crc kubenswrapper[4813]: I1125 10:32:14.114531 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:14 crc kubenswrapper[4813]: I1125 10:32:14.114540 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:14 crc kubenswrapper[4813]: I1125 10:32:14.114555 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:14 crc kubenswrapper[4813]: I1125 10:32:14.114564 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:14Z","lastTransitionTime":"2025-11-25T10:32:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:14 crc kubenswrapper[4813]: I1125 10:32:14.217133 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:14 crc kubenswrapper[4813]: I1125 10:32:14.217168 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:14 crc kubenswrapper[4813]: I1125 10:32:14.217176 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:14 crc kubenswrapper[4813]: I1125 10:32:14.217190 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:14 crc kubenswrapper[4813]: I1125 10:32:14.217199 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:14Z","lastTransitionTime":"2025-11-25T10:32:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:14 crc kubenswrapper[4813]: I1125 10:32:14.319243 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:14 crc kubenswrapper[4813]: I1125 10:32:14.319285 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:14 crc kubenswrapper[4813]: I1125 10:32:14.319295 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:14 crc kubenswrapper[4813]: I1125 10:32:14.319309 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:14 crc kubenswrapper[4813]: I1125 10:32:14.319319 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:14Z","lastTransitionTime":"2025-11-25T10:32:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:14 crc kubenswrapper[4813]: I1125 10:32:14.421574 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:14 crc kubenswrapper[4813]: I1125 10:32:14.421610 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:14 crc kubenswrapper[4813]: I1125 10:32:14.421618 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:14 crc kubenswrapper[4813]: I1125 10:32:14.421634 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:14 crc kubenswrapper[4813]: I1125 10:32:14.421645 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:14Z","lastTransitionTime":"2025-11-25T10:32:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:14 crc kubenswrapper[4813]: I1125 10:32:14.524494 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:14 crc kubenswrapper[4813]: I1125 10:32:14.524533 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:14 crc kubenswrapper[4813]: I1125 10:32:14.524543 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:14 crc kubenswrapper[4813]: I1125 10:32:14.524558 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:14 crc kubenswrapper[4813]: I1125 10:32:14.524568 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:14Z","lastTransitionTime":"2025-11-25T10:32:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:14 crc kubenswrapper[4813]: I1125 10:32:14.621191 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 10:32:14 crc kubenswrapper[4813]: I1125 10:32:14.621246 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 10:32:14 crc kubenswrapper[4813]: E1125 10:32:14.621311 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 10:32:14 crc kubenswrapper[4813]: E1125 10:32:14.621405 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 10:32:14 crc kubenswrapper[4813]: I1125 10:32:14.627073 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:14 crc kubenswrapper[4813]: I1125 10:32:14.627134 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:14 crc kubenswrapper[4813]: I1125 10:32:14.627150 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:14 crc kubenswrapper[4813]: I1125 10:32:14.627170 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:14 crc kubenswrapper[4813]: I1125 10:32:14.627181 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:14Z","lastTransitionTime":"2025-11-25T10:32:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:14 crc kubenswrapper[4813]: I1125 10:32:14.729967 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:14 crc kubenswrapper[4813]: I1125 10:32:14.730005 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:14 crc kubenswrapper[4813]: I1125 10:32:14.730019 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:14 crc kubenswrapper[4813]: I1125 10:32:14.730035 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:14 crc kubenswrapper[4813]: I1125 10:32:14.730046 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:14Z","lastTransitionTime":"2025-11-25T10:32:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:14 crc kubenswrapper[4813]: I1125 10:32:14.814573 4813 generic.go:334] "Generic (PLEG): container finished" podID="a2ac9045-f02f-4149-afa5-61da1452d547" containerID="345ac26e481961ce51e21644b04d31cd5a82c981e9a2355ddd863036cabb4a4a" exitCode=0 Nov 25 10:32:14 crc kubenswrapper[4813]: I1125 10:32:14.814643 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4s9w7" event={"ID":"a2ac9045-f02f-4149-afa5-61da1452d547","Type":"ContainerDied","Data":"345ac26e481961ce51e21644b04d31cd5a82c981e9a2355ddd863036cabb4a4a"} Nov 25 10:32:14 crc kubenswrapper[4813]: I1125 10:32:14.818964 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" event={"ID":"8460ec76-ba89-4f8f-9055-d7274ab52d11","Type":"ContainerStarted","Data":"d5a344c8be2b2a24dbe8591e0e33824d415e5551de94478447927c20469a72a5"} Nov 25 10:32:14 crc kubenswrapper[4813]: I1125 10:32:14.819276 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" Nov 25 10:32:14 crc kubenswrapper[4813]: I1125 10:32:14.830052 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"061a2a52-878f-4543-8408-3a7b838f8881\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761ff3f6b4afa8edd4892d9fe727e977fb9700a8c7ab1c149c12bfa6431951c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf09669b247e0daa0787d296aa833570e1a542082a7a698bb499dc34f16fa4be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,
\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e593ff2a6412d8dfd3cd96e456f4fe9e2f8b04302d5b9036b828a3cf480b573\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11e2aa9eaa941ade1982256194422becbe3f375508cd507f603a822b10e03134\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:14Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:14 crc kubenswrapper[4813]: I1125 10:32:14.832367 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:14 crc kubenswrapper[4813]: I1125 10:32:14.832414 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:14 crc kubenswrapper[4813]: I1125 10:32:14.832427 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:14 crc kubenswrapper[4813]: I1125 10:32:14.832443 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:14 crc kubenswrapper[4813]: I1125 10:32:14.832803 4813 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:14Z","lastTransitionTime":"2025-11-25T10:32:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:14 crc kubenswrapper[4813]: I1125 10:32:14.843624 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:14Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:14 crc kubenswrapper[4813]: I1125 10:32:14.860735 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:14Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:14 crc kubenswrapper[4813]: I1125 10:32:14.874704 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" Nov 25 10:32:14 crc kubenswrapper[4813]: I1125 10:32:14.876658 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:14Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:14 crc kubenswrapper[4813]: I1125 10:32:14.893296 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4s9w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2ac9045-f02f-4149-afa5-61da1452d547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://792d5ec80cac3667bf3ad534b473ae86eca391f49782cfc0938d789eefd24a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://792d5ec80cac3667bf3ad534b473ae86eca391f49782cfc0938d789eefd24a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2afd11e5128cad91161f49b1e5d6ac378dbd319773996dbe702bf678a45a4a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2afd11e5128cad91161f49b1e5d6ac378dbd319773996dbe702bf678a45a4a91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00af788f1e52f5e8adb3f20e61f5fbcfd1090e97a1f24d4ebe926dad23155ae5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00af788f1e52f5e8adb3f20e61f5fbcfd1090e97a1f24d4ebe926dad23155ae5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://156bff53f3008351c3f76a0cc5e9c3eeb4f19a7201392d095bc62012791d9fa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://156bff53f3008351c3f76a0cc5e9c3eeb4f19a7201392d095bc62012791d9fa5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a98899b475454bf9249b6437439cb15a56278a71678cd2c7a430b4c14ef4022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a98899b475454bf9249b6437439cb15a56278a71678cd2c7a430b4c14ef4022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://345ac26e481961ce51e21644b04d31cd5a82c981e9a2355ddd863036cabb4a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://345ac26e481961ce51e21644b04d31cd5a82c981e9a2355ddd863036cabb4a4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4s9w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:14Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:14 crc kubenswrapper[4813]: I1125 10:32:14.920540 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8460ec76-ba89-4f8f-9055-d7274ab52d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics 
northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"
host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6554bcb1ce7e97de39f99556fc4e3db63a583ea45bd87706a3c7737a8bde4f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6554bcb1ce7e97de39f99556fc4e3db63a583ea45bd87706a3c7737a8bde4f5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8s5k7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-25T10:32:14Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:14 crc kubenswrapper[4813]: I1125 10:32:14.935647 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:14 crc kubenswrapper[4813]: I1125 10:32:14.935744 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:14 crc kubenswrapper[4813]: I1125 10:32:14.935757 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:14 crc kubenswrapper[4813]: I1125 10:32:14.935776 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:14 crc kubenswrapper[4813]: I1125 10:32:14.935789 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:14Z","lastTransitionTime":"2025-11-25T10:32:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:14 crc kubenswrapper[4813]: I1125 10:32:14.942990 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03303956e8d88df49c9c142a7074fa39272a78ea67e868b302d3a663d7f7178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:14Z is after 
2025-08-24T17:21:41Z" Nov 25 10:32:14 crc kubenswrapper[4813]: I1125 10:32:14.960037 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86379c39-b839-4552-949c-35431188a3a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf4d6feac8fd516ce2d5e2ec13519c2bbd2d152cffe7c434fe2c4b478e8c9a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80f2017cddd8c12997b1818074df5aa37a902dca43c4b60dda58080e1887f8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225dc69c294a0063eda858d71902e848fb59d4595c25bfeecdf8dfb60fdcd6f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cbb3888ff07d07784e188a0b7b49e0f5b421cfaeb61924a0a46094fb3795b32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e393f04b541e0fc8c686b42396605529aa65fdaaf6602dd7c64a322a5071d643\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T10:31:57Z\\\",\\\"message\\\":\\\"W1125 10:31:46.900040 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1125 10:31:46.900557 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764066706 cert, and key in /tmp/serving-cert-1749499007/serving-signer.crt, /tmp/serving-cert-1749499007/serving-signer.key\\\\nI1125 10:31:47.317086 1 observer_polling.go:159] Starting file observer\\\\nW1125 10:31:47.321027 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 10:31:47.321219 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 10:31:47.325062 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1749499007/tls.crt::/tmp/serving-cert-1749499007/tls.key\\\\\\\"\\\\nF1125 10:31:57.761534 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46e1b456988c700012c86fac792b65d2e7c9a049057d5a17efbf600418191910\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3af1cb4a9a556e116874a9b7f7cb75a99db83b991b65280b6b2a96233ca0a85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3af1cb4a9a556e116874a9b7f7cb75a99db83b991b65280b6b2a96233ca0a85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:31:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:14Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:14 crc kubenswrapper[4813]: I1125 10:32:14.970856 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mmh87" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bcb41f8-67f5-4a87-8b49-07da054e0c81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fbf69eb2f0afb160e40675e9a17e8a9798a3f02de6a2f3aae7a30ef989e5479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xtc7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mmh87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:14Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:14 crc kubenswrapper[4813]: I1125 10:32:14.985569 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ece7e9c-d49a-4348-98ec-bd6ab589f750\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b85e2f2d2a870b205f19402a20540fa67104d12d2fcd412ada24c78b0602f2ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j55j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c16599a2b18976267f55176085b4b11e3e253e308707081d06d28d64f4dbb627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j55j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-knhz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:14Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:15 crc kubenswrapper[4813]: I1125 10:32:15.002859 4813 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00ebb057ca6152197fa76fc78787533ab8ddaa1e1a096c624e3efc5fcf091332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616fae5157b8d51f903f870d19e7ed40447c3eb954b0e1bd0b3323c27deb59f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:15Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:15 crc kubenswrapper[4813]: I1125 10:32:15.015219 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adac7b8b6297f077adc2d0e402547d19845a4b66a1279e143ba89f014ccdbf15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:15Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:15 crc kubenswrapper[4813]: I1125 10:32:15.035345 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rlpbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98439068-3c89-4c1b-8bb8-8aa848ef0cd3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73be3b0cabd20c94bd5c69211038398effe8adbb93eda17dbb136f17fa5ba62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdxm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rlpbx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:15Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:15 crc kubenswrapper[4813]: I1125 10:32:15.038023 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:15 crc kubenswrapper[4813]: I1125 10:32:15.038078 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:15 crc kubenswrapper[4813]: I1125 10:32:15.038088 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:15 crc kubenswrapper[4813]: I1125 10:32:15.038109 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:15 crc kubenswrapper[4813]: I1125 10:32:15.038123 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:15Z","lastTransitionTime":"2025-11-25T10:32:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:15 crc kubenswrapper[4813]: I1125 10:32:15.048782 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qltmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7637b907-3ae7-4b15-a4b9-a0c2217384a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://713975d4e8de4e14484cbd711f5279ddce3acad00571bf052b0ed728bd1a0ccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qvsb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"h
ostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qltmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:15Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:15 crc kubenswrapper[4813]: I1125 10:32:15.064454 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86379c39-b839-4552-949c-35431188a3a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf4d6feac8fd516ce2d5e2ec13519c2bbd2d152cffe7c434fe2c4b478e8c9a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80f2017cddd8c12997b1818074df5aa37a902dca43c4b60dda58080e1887f8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\
"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225dc69c294a0063eda858d71902e848fb59d4595c25bfeecdf8dfb60fdcd6f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cbb3888ff07d07784e188a0b7b49e0f5b421cfaeb61924a0a46094fb3795b32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e393f04b541e0fc8c686b42396605529aa65fdaaf6602dd7c64a322a5071d643\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T10:31:57Z\\\",\\\"message\\\":\\\"W1125 10:31:46.900040 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1125 10:31:46.900557 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764066706 cert, and key in /tmp/serving-cert-1749499007/serving-signer.crt, /tmp/serving-cert-1749499007/serving-signer.key\\\\nI1125 10:31:47.317086 1 observer_polling.go:159] Starting file observer\\\\nW1125 10:31:47.321027 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 10:31:47.321219 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 10:31:47.325062 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1749499007/tls.crt::/tmp/serving-cert-1749499007/tls.key\\\\\\\"\\\\nF1125 10:31:57.761534 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46e1b456988c700012c86fac792b65d2e7c9a049057d5a17efbf600418191910\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3af1cb4a9a556e116874a9b7f7cb75a99db83b991b65280b6b2a96233ca0a85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3af1cb4a9a556e116874a9b7f7cb75a99db83b991b65280b6b2a96233ca0a85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:31:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:15Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:15 crc kubenswrapper[4813]: I1125 10:32:15.078077 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mmh87" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bcb41f8-67f5-4a87-8b49-07da054e0c81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fbf69eb2f0afb160e40675e9a17e8a9798a3f02de6a2f3aae7a30ef989e5479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xtc7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mmh87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:15Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:15 crc kubenswrapper[4813]: I1125 10:32:15.095809 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ece7e9c-d49a-4348-98ec-bd6ab589f750\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b85e2f2d2a870b205f19402a20540fa67104d12d2fcd412ada24c78b0602f2ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j55j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c16599a2b18976267f55176085b4b11e3e253e308707081d06d28d64f4dbb627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j55j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-knhz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:15Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:15 crc kubenswrapper[4813]: I1125 10:32:15.110927 4813 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00ebb057ca6152197fa76fc78787533ab8ddaa1e1a096c624e3efc5fcf091332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616fae5157b8d51f903f870d19e7ed40447c3eb954b0e1bd0b3323c27deb59f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:15Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:15 crc kubenswrapper[4813]: I1125 10:32:15.122124 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adac7b8b6297f077adc2d0e402547d19845a4b66a1279e143ba89f014ccdbf15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:15Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:15 crc kubenswrapper[4813]: I1125 10:32:15.136502 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rlpbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98439068-3c89-4c1b-8bb8-8aa848ef0cd3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73be3b0cabd20c94bd5c69211038398effe8adbb93eda17dbb136f17fa5ba62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdxm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rlpbx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:15Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:15 crc kubenswrapper[4813]: I1125 10:32:15.140000 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:15 crc kubenswrapper[4813]: I1125 10:32:15.140037 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:15 crc kubenswrapper[4813]: I1125 10:32:15.140046 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:15 crc kubenswrapper[4813]: I1125 10:32:15.140060 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:15 crc kubenswrapper[4813]: I1125 10:32:15.140069 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:15Z","lastTransitionTime":"2025-11-25T10:32:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:15 crc kubenswrapper[4813]: I1125 10:32:15.148133 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qltmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7637b907-3ae7-4b15-a4b9-a0c2217384a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://713975d4e8de4e14484cbd711f5279ddce3acad00571bf052b0ed728bd1a0ccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qvsb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"h
ostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qltmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:15Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:15 crc kubenswrapper[4813]: I1125 10:32:15.162659 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"061a2a52-878f-4543-8408-3a7b838f8881\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761ff3f6b4afa8edd4892d9fe727e977fb9700a8c7ab1c149c12bfa6431951c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf09669b247e0daa0787d296aa833570e1a542082a7a698bb499dc34f16fa4be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e593ff2a6412d8dfd3cd96e456f4fe9e2f8b04302d5b9036b828a3cf480b573\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256
:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11e2aa9eaa941ade1982256194422becbe3f375508cd507f603a822b10e03134\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:15Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:15 crc kubenswrapper[4813]: I1125 10:32:15.177084 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:15Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:15 crc kubenswrapper[4813]: I1125 10:32:15.191847 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:15Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:15 crc kubenswrapper[4813]: I1125 10:32:15.202785 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:15Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:15 crc kubenswrapper[4813]: I1125 10:32:15.218336 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4s9w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2ac9045-f02f-4149-afa5-61da1452d547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://792d5ec80cac3667bf3ad534b473ae86eca391f49782cfc0938d789eefd24a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://792d5ec80cac3667bf3ad534b473ae86eca391f49782cfc0938d789eefd24a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2afd11e5128cad91161f49b1e5d6ac378dbd319773996dbe702bf678a45a4a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2afd11e5128cad91161f49b1e5d6ac378dbd319773996dbe702bf678a45a4a91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00af788f1e52f5e8adb3f20e61f5fbcfd1090e97a1f24d4ebe926dad23155ae5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00af788f1e52f5e8adb3f20e61f5fbcfd1090e97a1f24d4ebe926dad23155ae5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://156bff53f3008351c3f76a0cc5e9c3eeb4f19a7201392d095bc62012791d9fa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://156bff53f3008351c3f76a0cc5e9c3eeb4f19a7201392d095bc62012791d9fa5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a98899b475454bf9249b6437439cb15a56278a71678cd2c7a430b4c14ef4022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a98899b475454bf9249b6437439cb15a56278a71678cd2c7a430b4c14ef4022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://345ac26e481961ce51e21644b04d31cd5a82c981e9a2355ddd863036cabb4a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://345ac26e481961ce51e21644b04d31cd5a82c981e9a2355ddd863036cabb4a4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4s9w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:15Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:15 crc kubenswrapper[4813]: I1125 10:32:15.236983 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8460ec76-ba89-4f8f-9055-d7274ab52d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0292e263e2315d5f0352fb15d9e84e89f103c0b8e3371db2a611b001c5a3fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ab3178c217051fe9026c77a963c194bed57ec0fb9521678f41c7c16235ca789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee35613ff013fdd9f9ba4aa81006a99cd328ab65010b9b337815829bfcc88937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1581fa41d3a426258f7c464d5e0f2ad431917ccec0616d26bb8b0affa320c90e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c4c4032f6080041e0b54686cb2c9981d2578e7a2bd02bcc1cf008c8fa3bfb6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7324d51c21107fadbd2f170e16f3cc20fc473ca9b7b1bbe0fc5e64378bd6ab7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a344c8be2b2a24dbe8591e0e33824d415e5551
de94478447927c20469a72a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32898e756d7697bcb5b6ae6780b7b752be67b44b9ce8c2f2459477c7f0b0a28d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6554bcb1ce7e97de39f99556fc4e3db63a583ea45bd87706a3c7737a8bde4f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6554bcb1ce7e97de39f99556fc4e3db63a583ea45bd87706a3c7737a8bde4f5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8s5k7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:15Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:15 crc kubenswrapper[4813]: I1125 10:32:15.243482 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:15 crc kubenswrapper[4813]: I1125 10:32:15.243522 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:15 crc kubenswrapper[4813]: I1125 10:32:15.243531 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:15 crc kubenswrapper[4813]: I1125 10:32:15.243546 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:15 crc kubenswrapper[4813]: I1125 10:32:15.243555 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:15Z","lastTransitionTime":"2025-11-25T10:32:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:15 crc kubenswrapper[4813]: I1125 10:32:15.253661 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03303956e8d88df49c9c142a7074fa39272a78ea67e868b302d3a663d7f7178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:15Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:15 crc kubenswrapper[4813]: I1125 10:32:15.346137 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:15 crc kubenswrapper[4813]: I1125 10:32:15.346184 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:15 crc kubenswrapper[4813]: I1125 10:32:15.346198 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:15 crc kubenswrapper[4813]: I1125 10:32:15.346215 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:15 crc kubenswrapper[4813]: I1125 10:32:15.346231 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:15Z","lastTransitionTime":"2025-11-25T10:32:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:15 crc kubenswrapper[4813]: I1125 10:32:15.449823 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:15 crc kubenswrapper[4813]: I1125 10:32:15.449866 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:15 crc kubenswrapper[4813]: I1125 10:32:15.449875 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:15 crc kubenswrapper[4813]: I1125 10:32:15.449891 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:15 crc kubenswrapper[4813]: I1125 10:32:15.449902 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:15Z","lastTransitionTime":"2025-11-25T10:32:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:15 crc kubenswrapper[4813]: I1125 10:32:15.552060 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:15 crc kubenswrapper[4813]: I1125 10:32:15.552123 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:15 crc kubenswrapper[4813]: I1125 10:32:15.552140 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:15 crc kubenswrapper[4813]: I1125 10:32:15.552165 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:15 crc kubenswrapper[4813]: I1125 10:32:15.552185 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:15Z","lastTransitionTime":"2025-11-25T10:32:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:15 crc kubenswrapper[4813]: I1125 10:32:15.621241 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 10:32:15 crc kubenswrapper[4813]: E1125 10:32:15.621380 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 10:32:15 crc kubenswrapper[4813]: I1125 10:32:15.654166 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:15 crc kubenswrapper[4813]: I1125 10:32:15.654209 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:15 crc kubenswrapper[4813]: I1125 10:32:15.654220 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:15 crc kubenswrapper[4813]: I1125 10:32:15.654238 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:15 crc kubenswrapper[4813]: I1125 10:32:15.654251 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:15Z","lastTransitionTime":"2025-11-25T10:32:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:15 crc kubenswrapper[4813]: I1125 10:32:15.756710 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:15 crc kubenswrapper[4813]: I1125 10:32:15.756767 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:15 crc kubenswrapper[4813]: I1125 10:32:15.756779 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:15 crc kubenswrapper[4813]: I1125 10:32:15.756795 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:15 crc kubenswrapper[4813]: I1125 10:32:15.756807 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:15Z","lastTransitionTime":"2025-11-25T10:32:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:15 crc kubenswrapper[4813]: I1125 10:32:15.824878 4813 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 25 10:32:15 crc kubenswrapper[4813]: I1125 10:32:15.825293 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4s9w7" event={"ID":"a2ac9045-f02f-4149-afa5-61da1452d547","Type":"ContainerStarted","Data":"dbbdce0d7869276078c48cf3c335c37ec3c8f324e76db30e312485508977ed8d"} Nov 25 10:32:15 crc kubenswrapper[4813]: I1125 10:32:15.825589 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" Nov 25 10:32:15 crc kubenswrapper[4813]: I1125 10:32:15.840071 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4s9w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2ac9045-f02f-4149-afa5-61da1452d547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbdce0d7869276078c48cf3c335c37ec3c8f324e76db30e312485508977ed8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://792d5ec80cac3667bf3ad534b473ae86eca391f49782cfc0938d789eefd24a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://792d5ec80cac3667bf3ad534b473ae86eca391f49782cfc0938d789eefd24a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"reason\\\":\\\"Completed\\
\",\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2afd11e5128cad91161f49b1e5d6ac378dbd319773996dbe702bf678a45a4a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2afd11e5128cad91161f49b1e5d6ac378dbd319773996dbe702bf678a45a4a91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00af788f1e52f5e8adb3f20e61f5fbcfd1090e97a1f24d4ebe926dad23155ae5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00af788f1e52f5e8adb3f20e61f5fbcfd1090e97a1f24d4ebe926dad23155ae5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://156bff53f3008351c3f76a0cc5e9c3eeb4f19a7201392d095bc62012791d9fa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\
"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://156bff53f3008351c3f76a0cc5e9c3eeb4f19a7201392d095bc62012791d9fa5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a98899b475454bf9249b6437439cb15a56278a71678cd2c7a430b4c14ef4022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a98899b475454bf9249b6437439cb15a56278a71678cd2c7a430b4c14ef4022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://345ac26e481961ce51e21644b04d31cd5a82c981e9a2355ddd863036cabb4a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://345ac26e481961ce51e21644b04d31cd5a82c981e9a2355ddd863036cabb4a4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\
\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4s9w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:15Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:15 crc kubenswrapper[4813]: I1125 10:32:15.848387 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" Nov 25 10:32:15 crc kubenswrapper[4813]: I1125 10:32:15.859522 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:15 crc kubenswrapper[4813]: I1125 10:32:15.859807 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:15 crc kubenswrapper[4813]: I1125 10:32:15.860060 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:15 crc kubenswrapper[4813]: I1125 10:32:15.860246 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:15 crc kubenswrapper[4813]: I1125 10:32:15.860437 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:15Z","lastTransitionTime":"2025-11-25T10:32:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:15 crc kubenswrapper[4813]: I1125 10:32:15.861860 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8460ec76-ba89-4f8f-9055-d7274ab52d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0292e263e2315d5f0352fb15d9e84e89f103c0b8e3371db2a611b001c5a3fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ab3178c217051fe9026c77a963c194bed57ec0fb9521678f41c7c16235ca789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"cont
ainerID\\\":\\\"cri-o://ee35613ff013fdd9f9ba4aa81006a99cd328ab65010b9b337815829bfcc88937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1581fa41d3a426258f7c464d5e0f2ad431917ccec0616d26bb8b0affa320c90e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c4c4032f6080041e0b54686cb2c9981d2578e7a2bd02bcc1cf008c8fa3bfb6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7324d51c21107fadbd2f170e16f3cc20fc473ca9b7b1bbe0fc5e64378bd6ab7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a344c8be2b2a24dbe8591e0e33824d415e5551de94478447927c20469a72a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkub
e-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32898e756d7697bcb5b6ae6780b7b752be67b44b9ce8c2f2459477c7f0b0a28d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6554bcb1ce7e97de39f99556fc4e3db63a583ea45bd87706a3c7737a8bde4f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6554bcb1ce7e97de39f99556fc4e3db63a583ea45bd87706a3c7737a8bde4f5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8s5k7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:15Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:15 crc kubenswrapper[4813]: I1125 10:32:15.873142 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"061a2a52-878f-4543-8408-3a7b838f8881\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761ff3f6b4afa8edd4892d9fe727e977fb9700a8c7ab1c149c12bfa6431951c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf09669b247e0daa0787d296aa833570e1a542082a7a698bb499dc34f16fa4be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e593ff2a6412d8dfd3cd96e456f4fe9e2f8b04302d5b9036b828a3cf480b573\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11e2aa9eaa941ade1982256194422becbe3f375508cd507f603a822b10e03134\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:15Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:15 crc kubenswrapper[4813]: I1125 10:32:15.884855 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:15Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:15 crc kubenswrapper[4813]: I1125 10:32:15.895845 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:15Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:15 crc kubenswrapper[4813]: I1125 10:32:15.911321 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:15Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:15 crc kubenswrapper[4813]: I1125 10:32:15.925494 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03303956e8d88df49c9c142a7074fa39272a78ea67e868b302d3a663d7f7178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:15Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:15 crc kubenswrapper[4813]: I1125 10:32:15.939829 4813 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86379c39-b839-4552-949c-35431188a3a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf4d6feac8fd516ce2d5e2ec13519c2bbd2d152cffe7c434fe2c4b478e8c9a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80f2017cddd8c12997b1818074df5aa37a902dca43c4b60dda58080e1887f8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225dc69c294a0063eda858d71902e848fb59d4595c25bfeecdf8dfb60fdcd6f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc
/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cbb3888ff07d07784e188a0b7b49e0f5b421cfaeb61924a0a46094fb3795b32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e393f04b541e0fc8c686b42396605529aa65fdaaf6602dd7c64a322a5071d643\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T10:31:57Z\\\",\\\"message\\\":\\\"W1125 10:31:46.900040 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1125 10:31:46.900557 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764066706 cert, and key in /tmp/serving-cert-1749499007/serving-signer.crt, /tmp/serving-cert-1749499007/serving-signer.key\\\\nI1125 10:31:47.317086 1 observer_polling.go:159] Starting file observer\\\\nW1125 10:31:47.321027 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 10:31:47.321219 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 10:31:47.325062 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1749499007/tls.crt::/tmp/serving-cert-1749499007/tls.key\\\\\\\"\\\\nF1125 10:31:57.761534 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46e1b456988c700012c86fac792b65d2e7c9a049057d5a17efbf600418191910\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3af1cb4a9a556e116874a9b7f7cb75a99db83b991b65280b6b2a96233ca0a85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3af1cb4a9a556e116874a9b7f7cb75a99db83b991b65280b6b2a96233ca0a85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:31:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:15Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:15 crc kubenswrapper[4813]: I1125 10:32:15.949748 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mmh87" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bcb41f8-67f5-4a87-8b49-07da054e0c81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fbf69eb2f0afb160e40675e9a17e8a9798a3f02de6a2f3aae7a30ef989e5479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xtc7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mmh87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:15Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:15 crc kubenswrapper[4813]: I1125 10:32:15.959155 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ece7e9c-d49a-4348-98ec-bd6ab589f750\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b85e2f2d2a870b205f19402a20540fa67104d12d2fcd412ada24c78b0602f2ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j55j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c16599a2b18976267f55176085b4b11e3e253e308707081d06d28d64f4dbb627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j55j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-knhz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:15Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:15 crc kubenswrapper[4813]: I1125 10:32:15.962292 4813 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:15 crc kubenswrapper[4813]: I1125 10:32:15.962329 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:15 crc kubenswrapper[4813]: I1125 10:32:15.962338 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:15 crc kubenswrapper[4813]: I1125 10:32:15.962353 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:15 crc kubenswrapper[4813]: I1125 10:32:15.962363 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:15Z","lastTransitionTime":"2025-11-25T10:32:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:15 crc kubenswrapper[4813]: I1125 10:32:15.970083 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00ebb057ca6152197fa76fc78787533ab8ddaa1e1a096c624e3efc5fcf091332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616fae5157b8d51f903f870d19e7ed40447c3eb954b0e1bd0b3323c27deb59f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:15Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:15 crc kubenswrapper[4813]: I1125 10:32:15.980253 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adac7b8b6297f077adc2d0e402547d19845a4b66a1279e143ba89f014ccdbf15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:15Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:15 crc kubenswrapper[4813]: I1125 10:32:15.992622 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rlpbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98439068-3c89-4c1b-8bb8-8aa848ef0cd3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73be3b0cabd20c94bd5c69211038398effe8adbb93eda17dbb136f17fa5ba62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdxm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rlpbx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:15Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:16 crc kubenswrapper[4813]: I1125 10:32:16.002236 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qltmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7637b907-3ae7-4b15-a4b9-a0c2217384a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://713975d4e8de4e14484cbd711f5279ddce3acad00571bf052b0ed728bd1a0ccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qvsb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qltmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:16Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:16 crc kubenswrapper[4813]: I1125 10:32:16.014420 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86379c39-b839-4552-949c-35431188a3a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf4d6feac8fd516ce2d5e2ec13519c2bbd2d152cffe7c434fe2c4b478e8c9a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80f2017cddd8c12997b1818074df5aa37a902dca43c4b60dda58080e1887f8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225dc69c294a0063eda858d71902e848fb59d4595c25bfeecdf8dfb60fdcd6f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cbb3888ff07d07784e188a0b7b49e0f5b421cfaeb61924a0a46094fb3795b32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e393f04b541e0fc8c686b42396605529aa65fdaaf6602dd7c64a322a5071d643\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T10:31:57Z\\\",\\\"message\\\":\\\"W1125 10:31:46.900040 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1125 10:31:46.900557 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764066706 cert, and key in /tmp/serving-cert-1749499007/serving-signer.crt, /tmp/serving-cert-1749499007/serving-signer.key\\\\nI1125 10:31:47.317086 1 observer_polling.go:159] Starting file observer\\\\nW1125 10:31:47.321027 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 10:31:47.321219 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 10:31:47.325062 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1749499007/tls.crt::/tmp/serving-cert-1749499007/tls.key\\\\\\\"\\\\nF1125 10:31:57.761534 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46e1b456988c700012c86fac792b65d2e7c9a049057d5a17efbf600418191910\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3af1cb4a9a556e116874a9b7f7cb75a99db83b991b65280b6b2a96233ca0a85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3af1cb4a9a556e116874a9b7f7cb75a99db83b991b65280b6b2a96233ca0a85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:31:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:16Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:16 crc kubenswrapper[4813]: I1125 10:32:16.025075 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mmh87" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bcb41f8-67f5-4a87-8b49-07da054e0c81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fbf69eb2f0afb160e40675e9a17e8a9798a3f02de6a2f3aae7a30ef989e5479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xtc7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mmh87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:16Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:16 crc kubenswrapper[4813]: I1125 10:32:16.034914 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ece7e9c-d49a-4348-98ec-bd6ab589f750\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b85e2f2d2a870b205f19402a20540fa67104d12d2fcd412ada24c78b0602f2ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j55j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c16599a2b18976267f55176085b4b11e3e253e308707081d06d28d64f4dbb627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j55j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-knhz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:16Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:16 crc kubenswrapper[4813]: I1125 10:32:16.045844 4813 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00ebb057ca6152197fa76fc78787533ab8ddaa1e1a096c624e3efc5fcf091332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616fae5157b8d51f903f870d19e7ed40447c3eb954b0e1bd0b3323c27deb59f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:16Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:16 crc kubenswrapper[4813]: I1125 10:32:16.057803 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adac7b8b6297f077adc2d0e402547d19845a4b66a1279e143ba89f014ccdbf15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:16Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:16 crc kubenswrapper[4813]: I1125 10:32:16.064353 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:16 crc kubenswrapper[4813]: I1125 10:32:16.064393 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:16 crc kubenswrapper[4813]: I1125 10:32:16.064404 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:16 crc kubenswrapper[4813]: I1125 10:32:16.064420 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:16 crc kubenswrapper[4813]: I1125 10:32:16.064428 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:16Z","lastTransitionTime":"2025-11-25T10:32:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:16 crc kubenswrapper[4813]: I1125 10:32:16.071320 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rlpbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98439068-3c89-4c1b-8bb8-8aa848ef0cd3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73be3b0cabd20c94bd5c69211038398effe8adbb93eda17dbb136f17fa5ba62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdxm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rlpbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:16Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:16 crc kubenswrapper[4813]: I1125 10:32:16.080040 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qltmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7637b907-3ae7-4b15-a4b9-a0c2217384a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://713975d4e8de4e14484cbd711f5279ddce3acad00571bf052b0ed728bd1a0ccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qvsb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qltmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:16Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:16 crc kubenswrapper[4813]: I1125 10:32:16.090797 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"061a2a52-878f-4543-8408-3a7b838f8881\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761ff3f6b4afa8edd4892d9fe727e977fb9700a8c7ab1c149c12bfa6431951c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf09669b247e0daa0787d296aa833570e1a542082a7a698bb499dc34f16fa4be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e593ff2a6412d8dfd3cd96e456f4fe9e2f8b04302d5b9036b828a3cf480b573\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11e2aa9eaa941ade1982256194422becbe3f375508cd507f603a822b10e03134\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:16Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:16 crc kubenswrapper[4813]: I1125 10:32:16.102706 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:16Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:16 crc kubenswrapper[4813]: I1125 10:32:16.115563 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:16Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:16 crc kubenswrapper[4813]: I1125 10:32:16.131100 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:16Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:16 crc kubenswrapper[4813]: I1125 10:32:16.152019 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4s9w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2ac9045-f02f-4149-afa5-61da1452d547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbdce0d7869276078c48cf3c335c37ec3c8f324e76db30e312485508977ed8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://792d5ec80cac3667bf3ad534b473ae86eca391f49782cfc0938d789eefd24a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://792d5ec80cac3667bf3ad534b473ae86eca391f49782cfc0938d789eefd24a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2afd11e5128cad91161f49b1e5d6ac378dbd319773996dbe702bf678a45a4a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2afd11e5128cad91161f49b1e5d6ac378dbd319773996dbe702bf678a45a4a91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00af788f1e52f5e8adb3f20e61f5fbcfd1090e97a1f24d4ebe926dad23155ae5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00af788f1e52f5e8adb3f20e61f5fbcfd1090e97a1f24d4ebe926dad23155ae5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://156bff53f3008351c3f76a0cc5e9c3eeb4f19a7201392d095bc62012791d9fa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://156bff53f3008351c3f76a0cc5e9c3eeb4f19a7201392d095bc62012791d9fa5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a98899b475454bf9249b6437439cb15a56278a71678cd2c7a430b4c14ef4022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a98899b475454bf9249b6437439cb15a56278a71678cd2c7a430b4c14ef4022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://345ac26e481961ce51e21644b04d31cd5a82c981e9a2355ddd863036cabb4a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://345ac26e481961ce51e21644b04d31cd5a82c981e9a2355ddd863036cabb4a4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4s9w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:16Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:16 crc kubenswrapper[4813]: I1125 10:32:16.166502 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:16 crc kubenswrapper[4813]: I1125 10:32:16.166540 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:16 crc kubenswrapper[4813]: I1125 10:32:16.166555 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:16 crc kubenswrapper[4813]: I1125 10:32:16.166575 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:16 crc kubenswrapper[4813]: I1125 10:32:16.166588 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:16Z","lastTransitionTime":"2025-11-25T10:32:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:16 crc kubenswrapper[4813]: I1125 10:32:16.176905 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8460ec76-ba89-4f8f-9055-d7274ab52d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0292e263e2315d5f0352fb15d9e84e89f103c0b8e3371db2a611b001c5a3fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ab3178c217051fe9026c77a963c194bed57ec0fb9521678f41c7c16235ca789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://ee35613ff013fdd9f9ba4aa81006a99cd328ab65010b9b337815829bfcc88937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1581fa41d3a426258f7c464d5e0f2ad431917ccec0616d26bb8b0affa320c90e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c4c4032f6080041e0b54686cb2c9981d2578e7a2bd02bcc1cf008c8fa3bfb6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7324d51c21107fadbd2f170e16f3cc20fc473ca9b7b1bbe0fc5e64378bd6ab7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a344c8be2b2a24dbe8591e0e33824d415e5551de94478447927c20469a72a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\
"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32898e756d7697bcb5b6ae6780b7b752be67b44b9ce8c2f2459477c7f0b0a28d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6554bcb1ce7e97de39f99556fc4e3db63a583ea45bd87706a3c7737a8bde4f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6554bcb1ce7e97de39f99556fc4e3db63a583ea45bd87706a3c7737a8bde4f5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8s5k7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:16Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:16 crc kubenswrapper[4813]: I1125 10:32:16.197337 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03303956e8d88df49c9c142a7074fa39272a78ea67e868b302d3a663d7f7178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:16Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:16 crc kubenswrapper[4813]: I1125 10:32:16.269564 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:16 crc kubenswrapper[4813]: I1125 10:32:16.269643 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:16 crc kubenswrapper[4813]: I1125 10:32:16.269670 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:16 crc kubenswrapper[4813]: I1125 10:32:16.269742 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:16 crc kubenswrapper[4813]: I1125 10:32:16.269766 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:16Z","lastTransitionTime":"2025-11-25T10:32:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:16 crc kubenswrapper[4813]: I1125 10:32:16.372447 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:16 crc kubenswrapper[4813]: I1125 10:32:16.372499 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:16 crc kubenswrapper[4813]: I1125 10:32:16.372510 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:16 crc kubenswrapper[4813]: I1125 10:32:16.372528 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:16 crc kubenswrapper[4813]: I1125 10:32:16.372540 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:16Z","lastTransitionTime":"2025-11-25T10:32:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:16 crc kubenswrapper[4813]: I1125 10:32:16.428510 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 10:32:16 crc kubenswrapper[4813]: I1125 10:32:16.442335 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03303956e8d88df49c9c142a7074fa39272a78ea67e868b302d3a663d7f7178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:16Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:16 crc kubenswrapper[4813]: I1125 10:32:16.456611 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86379c39-b839-4552-949c-35431188a3a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf4d6feac8fd516ce2d5e2ec13519c2bbd2d152cffe7c434fe2c4b478e8c9a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80f2017cddd8c12997b1818074df5aa37a902dca43c4b60dda58080e1887f8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225dc69c294a0063eda858d71902e848fb59d4595c25bfeecdf8dfb60fdcd6f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cbb3888ff07d07784e188a0b7b49e0f5b421cfaeb61924a0a46094fb3795b32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e393f04b541e0fc8c686b42396605529aa65fdaaf6602dd7c64a322a5071d643\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T10:31:57Z\\\",\\\"message\\\":\\\"W1125 10:31:46.900040 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1125 10:31:46.900557 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764066706 cert, and key in /tmp/serving-cert-1749499007/serving-signer.crt, /tmp/serving-cert-1749499007/serving-signer.key\\\\nI1125 10:31:47.317086 1 observer_polling.go:159] Starting file observer\\\\nW1125 10:31:47.321027 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 10:31:47.321219 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 10:31:47.325062 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1749499007/tls.crt::/tmp/serving-cert-1749499007/tls.key\\\\\\\"\\\\nF1125 10:31:57.761534 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46e1b456988c700012c86fac792b65d2e7c9a049057d5a17efbf600418191910\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3af1cb4a9a556e116874a9b7f7cb75a99db83b991b65280b6b2a96233ca0a85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3af1cb4a9a556e116874a9b7f7cb75a99db83b991b65280b6b2a96233ca0a85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:31:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:16Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:16 crc kubenswrapper[4813]: I1125 10:32:16.469185 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mmh87" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bcb41f8-67f5-4a87-8b49-07da054e0c81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fbf69eb2f0afb160e40675e9a17e8a9798a3f02de6a2f3aae7a30ef989e5479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xtc7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mmh87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:16Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:16 crc kubenswrapper[4813]: I1125 10:32:16.474510 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:16 crc kubenswrapper[4813]: I1125 10:32:16.474553 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:16 crc kubenswrapper[4813]: I1125 10:32:16.474563 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:16 crc kubenswrapper[4813]: I1125 10:32:16.474578 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:16 crc kubenswrapper[4813]: I1125 10:32:16.474600 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:16Z","lastTransitionTime":"2025-11-25T10:32:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:16 crc kubenswrapper[4813]: I1125 10:32:16.481914 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ece7e9c-d49a-4348-98ec-bd6ab589f750\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b85e2f2d2a870b205f19402a20540fa67104d12d2fcd412ada24c78b0602f2ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j55j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c16599a2b18976267f55176085b4b11e3e253e308707081d06d28d64f4dbb627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j55j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-knhz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:16Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:16 crc kubenswrapper[4813]: I1125 10:32:16.492549 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qltmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7637b907-3ae7-4b15-a4b9-a0c2217384a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://713975d4e8de4e14484cbd711f5279ddce3acad00571bf052b0ed728bd1a0ccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qvsb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qltmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:16Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:16 crc kubenswrapper[4813]: I1125 10:32:16.506732 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00ebb057ca6152197fa76fc78787533ab8ddaa1e1a096c624e3efc5fcf091332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616fae5157b8d51f903f870d19e7ed40447c3eb954b0e1bd0b3323c27deb59f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:16Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:16 crc kubenswrapper[4813]: I1125 10:32:16.519861 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adac7b8b6297f077adc2d0e402547d19845a4b66a1279e143ba89f014ccdbf15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:16Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:16 crc kubenswrapper[4813]: I1125 10:32:16.536871 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rlpbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98439068-3c89-4c1b-8bb8-8aa848ef0cd3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73be3b0cabd20c94bd5c69211038398effe8adbb93eda17dbb136f17fa5ba62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdxm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rlpbx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:16Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:16 crc kubenswrapper[4813]: I1125 10:32:16.550473 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:16Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:16 crc kubenswrapper[4813]: I1125 10:32:16.563452 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4s9w7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2ac9045-f02f-4149-afa5-61da1452d547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbdce0d7869276078c48cf3c335c37ec3c8f324e76db30e312485508977ed8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://792d5ec80cac3667bf3ad534b473ae86eca391f49782cfc0938d789eefd24a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://792d5ec80cac3667bf3ad534b473ae86eca391f49782cfc0938d789eefd24a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2afd11e5128cad91161f49b1e5d6ac378dbd319773996dbe702bf678a45a4a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2afd11e5128cad91161f49b1e5d6ac378dbd319773996dbe702bf678a45a4a91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00af788f1e52f5e8adb3f20e61f5fbcfd1090e97a1f24d4ebe926dad23155ae5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00af788f1e52f5e8adb3f20e61f5fbcfd1090e97a1f24d4ebe926dad23155ae5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://156bff53f3008351c3f76a0cc5e9c3eeb4f19a7201392d095bc62012791d9fa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://156bff53f3008351c3f76a0cc5e9c3eeb4f19a7201392d095bc62012791d9fa5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a98899b475454bf9249b6437439cb15a56278a71678cd2c7a430b4c14ef4022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a98899b475454bf9249b6437439cb15a56278a71678cd2c7a430b4c14ef4022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://345ac26e481961ce51e21644b04d31cd5a82c981e9a2355ddd863036cabb4a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://345ac26e481961ce51e21644b04d31cd5a82c981e9a2355ddd863036cabb4a4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4s9w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:16Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:16 crc kubenswrapper[4813]: I1125 10:32:16.577047 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:16 crc kubenswrapper[4813]: I1125 10:32:16.577104 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:16 crc 
kubenswrapper[4813]: I1125 10:32:16.577116 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:16 crc kubenswrapper[4813]: I1125 10:32:16.577132 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:16 crc kubenswrapper[4813]: I1125 10:32:16.577142 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:16Z","lastTransitionTime":"2025-11-25T10:32:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:16 crc kubenswrapper[4813]: I1125 10:32:16.579893 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8460ec76-ba89-4f8f-9055-d7274ab52d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0292e263e2315d5f0352fb15d9e84e89f103c0b8e3371db2a611b001c5a3fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ab3178c217051fe9026c77a963c194bed57ec0fb9521678f41c7c16235ca789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee35613ff013fdd9f9ba4aa81006a99cd328ab65010b9b337815829bfcc88937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1581fa41d3a426258f7c464d5e0f2ad431917ccec0616d26bb8b0affa320c90e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c4c4032f6080041e0b54686cb2c9981d2578e7a2bd02bcc1cf008c8fa3bfb6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7324d51c21107fadbd2f170e16f3cc20fc473ca9b7b1bbe0fc5e64378bd6ab7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a344c8be2b2a24dbe8591e0e33824d415e5551
de94478447927c20469a72a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32898e756d7697bcb5b6ae6780b7b752be67b44b9ce8c2f2459477c7f0b0a28d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6554bcb1ce7e97de39f99556fc4e3db63a583ea45bd87706a3c7737a8bde4f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6554bcb1ce7e97de39f99556fc4e3db63a583ea45bd87706a3c7737a8bde4f5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8s5k7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:16Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:16 crc kubenswrapper[4813]: I1125 10:32:16.591968 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"061a2a52-878f-4543-8408-3a7b838f8881\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761ff3f6b4afa8edd4892d9fe727e977fb9700a8c7ab1c149c12bfa6431951c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf09669b247e0daa0787d296aa833570e1a542082a7a698bb499dc34f16fa4be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e593ff2a6412d8dfd3cd96e456f4fe9e2f8b04302d5b9036b828a3cf480b573\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11e2aa9eaa941ade1982256194422becbe3f375508cd507f603a822b10e03134\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:16Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:16 crc kubenswrapper[4813]: I1125 10:32:16.607212 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:16Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:16 crc kubenswrapper[4813]: I1125 10:32:16.619867 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:16Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:16 crc kubenswrapper[4813]: I1125 10:32:16.620979 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 10:32:16 crc kubenswrapper[4813]: I1125 10:32:16.621024 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 10:32:16 crc kubenswrapper[4813]: E1125 10:32:16.621087 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 10:32:16 crc kubenswrapper[4813]: E1125 10:32:16.621136 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 10:32:16 crc kubenswrapper[4813]: I1125 10:32:16.679605 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:16 crc kubenswrapper[4813]: I1125 10:32:16.679665 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:16 crc kubenswrapper[4813]: I1125 10:32:16.679695 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:16 crc kubenswrapper[4813]: I1125 10:32:16.679713 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:16 crc kubenswrapper[4813]: I1125 10:32:16.679726 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:16Z","lastTransitionTime":"2025-11-25T10:32:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:16 crc kubenswrapper[4813]: I1125 10:32:16.782292 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:16 crc kubenswrapper[4813]: I1125 10:32:16.782338 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:16 crc kubenswrapper[4813]: I1125 10:32:16.782349 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:16 crc kubenswrapper[4813]: I1125 10:32:16.782366 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:16 crc kubenswrapper[4813]: I1125 10:32:16.782404 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:16Z","lastTransitionTime":"2025-11-25T10:32:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:16 crc kubenswrapper[4813]: I1125 10:32:16.827471 4813 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 25 10:32:16 crc kubenswrapper[4813]: I1125 10:32:16.885401 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:16 crc kubenswrapper[4813]: I1125 10:32:16.885444 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:16 crc kubenswrapper[4813]: I1125 10:32:16.885452 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:16 crc kubenswrapper[4813]: I1125 10:32:16.885465 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:16 crc kubenswrapper[4813]: I1125 10:32:16.885475 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:16Z","lastTransitionTime":"2025-11-25T10:32:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:16 crc kubenswrapper[4813]: I1125 10:32:16.987851 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:16 crc kubenswrapper[4813]: I1125 10:32:16.987899 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:16 crc kubenswrapper[4813]: I1125 10:32:16.987911 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:16 crc kubenswrapper[4813]: I1125 10:32:16.987929 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:16 crc kubenswrapper[4813]: I1125 10:32:16.987940 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:16Z","lastTransitionTime":"2025-11-25T10:32:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:17 crc kubenswrapper[4813]: I1125 10:32:17.090791 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:17 crc kubenswrapper[4813]: I1125 10:32:17.090830 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:17 crc kubenswrapper[4813]: I1125 10:32:17.090844 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:17 crc kubenswrapper[4813]: I1125 10:32:17.090861 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:17 crc kubenswrapper[4813]: I1125 10:32:17.090874 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:17Z","lastTransitionTime":"2025-11-25T10:32:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:17 crc kubenswrapper[4813]: I1125 10:32:17.193303 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:17 crc kubenswrapper[4813]: I1125 10:32:17.193358 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:17 crc kubenswrapper[4813]: I1125 10:32:17.193374 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:17 crc kubenswrapper[4813]: I1125 10:32:17.193399 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:17 crc kubenswrapper[4813]: I1125 10:32:17.193416 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:17Z","lastTransitionTime":"2025-11-25T10:32:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:17 crc kubenswrapper[4813]: I1125 10:32:17.295320 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:17 crc kubenswrapper[4813]: I1125 10:32:17.295477 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:17 crc kubenswrapper[4813]: I1125 10:32:17.295492 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:17 crc kubenswrapper[4813]: I1125 10:32:17.295515 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:17 crc kubenswrapper[4813]: I1125 10:32:17.295529 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:17Z","lastTransitionTime":"2025-11-25T10:32:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:17 crc kubenswrapper[4813]: I1125 10:32:17.397929 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:17 crc kubenswrapper[4813]: I1125 10:32:17.398011 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:17 crc kubenswrapper[4813]: I1125 10:32:17.398020 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:17 crc kubenswrapper[4813]: I1125 10:32:17.398033 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:17 crc kubenswrapper[4813]: I1125 10:32:17.398082 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:17Z","lastTransitionTime":"2025-11-25T10:32:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:17 crc kubenswrapper[4813]: I1125 10:32:17.500972 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:17 crc kubenswrapper[4813]: I1125 10:32:17.501019 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:17 crc kubenswrapper[4813]: I1125 10:32:17.501049 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:17 crc kubenswrapper[4813]: I1125 10:32:17.501068 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:17 crc kubenswrapper[4813]: I1125 10:32:17.501081 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:17Z","lastTransitionTime":"2025-11-25T10:32:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:17 crc kubenswrapper[4813]: I1125 10:32:17.512447 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sbzfj"] Nov 25 10:32:17 crc kubenswrapper[4813]: I1125 10:32:17.512901 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sbzfj" Nov 25 10:32:17 crc kubenswrapper[4813]: I1125 10:32:17.515444 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Nov 25 10:32:17 crc kubenswrapper[4813]: I1125 10:32:17.516467 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Nov 25 10:32:17 crc kubenswrapper[4813]: I1125 10:32:17.530551 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03303956e8d88df49c9c142a7074fa39272a78ea67e868b302d3a663d7f7178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:17Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:17 crc kubenswrapper[4813]: I1125 10:32:17.543694 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ece7e9c-d49a-4348-98ec-bd6ab589f750\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b85e2f2d2a870b205f19402a20540fa67104d12d2fcd412ada24c78b0602f2ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j55j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c16599a2b18976267f55176085b4b11e3e253e308707081d06d28d64f4dbb627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j55j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-knhz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:17Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:17 crc kubenswrapper[4813]: I1125 10:32:17.559389 4813 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86379c39-b839-4552-949c-35431188a3a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf4d6feac8fd516ce2d5e2ec13519c2bbd2d152cffe7c434fe2c4b478e8c9a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80f2017cddd8c12997b1818074df5aa37a902dca43c4b60dda58080e1887f8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225dc69c294a0063eda858d71902e848fb59d4595c25bfeecdf8dfb60fdcd6f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cbb3888ff07d07784e188a0b7b49e0f5b421cfaeb6
1924a0a46094fb3795b32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e393f04b541e0fc8c686b42396605529aa65fdaaf6602dd7c64a322a5071d643\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T10:31:57Z\\\",\\\"message\\\":\\\"W1125 10:31:46.900040 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1125 10:31:46.900557 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764066706 cert, and key in /tmp/serving-cert-1749499007/serving-signer.crt, /tmp/serving-cert-1749499007/serving-signer.key\\\\nI1125 10:31:47.317086 1 observer_polling.go:159] Starting file observer\\\\nW1125 10:31:47.321027 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 10:31:47.321219 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 10:31:47.325062 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1749499007/tls.crt::/tmp/serving-cert-1749499007/tls.key\\\\\\\"\\\\nF1125 10:31:57.761534 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46e1b456988c700012c86fac792b65d2e7c9a049057d5a17efbf600418191910\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3af1cb4a9a556e116874a9b7f7cb75a99db83b991b65280b6b2a96233ca0a85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3af1cb4a9a556e116874a9b7f7cb75a99db83b991b65280b6b2a96233ca0a85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:31:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:17Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:17 crc kubenswrapper[4813]: I1125 10:32:17.570659 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mmh87" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bcb41f8-67f5-4a87-8b49-07da054e0c81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fbf69eb2f0afb160e40675e9a17e8a9798a3f02de6a2f3aae7a30ef989e5479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xtc7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mmh87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:17Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:17 crc kubenswrapper[4813]: I1125 10:32:17.582458 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adac7b8b6297f077adc2d0e402547d19845a4b66a1279e143ba89f014ccdbf15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:17Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:17 crc kubenswrapper[4813]: I1125 10:32:17.595229 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rlpbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98439068-3c89-4c1b-8bb8-8aa848ef0cd3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73be3b0cabd20c94bd5c69211038398effe8adbb93eda17dbb136f17fa5ba62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdxm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rlpbx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:17Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:17 crc kubenswrapper[4813]: I1125 10:32:17.603423 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:17 crc kubenswrapper[4813]: I1125 10:32:17.603465 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:17 crc kubenswrapper[4813]: I1125 10:32:17.603474 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:17 crc kubenswrapper[4813]: I1125 10:32:17.603490 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:17 crc kubenswrapper[4813]: I1125 10:32:17.603501 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:17Z","lastTransitionTime":"2025-11-25T10:32:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:17 crc kubenswrapper[4813]: I1125 10:32:17.606513 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qltmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7637b907-3ae7-4b15-a4b9-a0c2217384a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://713975d4e8de4e14484cbd711f5279ddce3acad00571bf052b0ed728bd1a0ccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qvsb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"h
ostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qltmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:17Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:17 crc kubenswrapper[4813]: I1125 10:32:17.618899 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sbzfj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eccc6bcf-65c9-4741-a1d7-e5545661d3d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8s86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8s86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sbzfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:17Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:17 crc kubenswrapper[4813]: I1125 10:32:17.620967 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 10:32:17 crc kubenswrapper[4813]: E1125 10:32:17.621134 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 10:32:17 crc kubenswrapper[4813]: I1125 10:32:17.631950 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00ebb057ca6152197fa76fc78787533ab8ddaa1e1a096c624e3efc5fcf091332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616fae5157b8d51f903f870d19e7ed40447c3eb954b0e1bd0b3323c27deb59f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:17Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:17 crc kubenswrapper[4813]: I1125 10:32:17.646719 4813 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:17Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:17 crc kubenswrapper[4813]: I1125 10:32:17.647191 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8s86\" (UniqueName: \"kubernetes.io/projected/eccc6bcf-65c9-4741-a1d7-e5545661d3d6-kube-api-access-t8s86\") pod \"ovnkube-control-plane-749d76644c-sbzfj\" (UID: \"eccc6bcf-65c9-4741-a1d7-e5545661d3d6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sbzfj" Nov 25 10:32:17 crc kubenswrapper[4813]: I1125 10:32:17.647240 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/eccc6bcf-65c9-4741-a1d7-e5545661d3d6-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-sbzfj\" (UID: \"eccc6bcf-65c9-4741-a1d7-e5545661d3d6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sbzfj" Nov 25 10:32:17 crc kubenswrapper[4813]: I1125 10:32:17.647282 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/eccc6bcf-65c9-4741-a1d7-e5545661d3d6-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-sbzfj\" (UID: 
\"eccc6bcf-65c9-4741-a1d7-e5545661d3d6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sbzfj" Nov 25 10:32:17 crc kubenswrapper[4813]: I1125 10:32:17.647312 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/eccc6bcf-65c9-4741-a1d7-e5545661d3d6-env-overrides\") pod \"ovnkube-control-plane-749d76644c-sbzfj\" (UID: \"eccc6bcf-65c9-4741-a1d7-e5545661d3d6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sbzfj" Nov 25 10:32:17 crc kubenswrapper[4813]: I1125 10:32:17.660302 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:17Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:17 crc kubenswrapper[4813]: I1125 10:32:17.672600 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:17Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:17 crc kubenswrapper[4813]: I1125 10:32:17.687964 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4s9w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2ac9045-f02f-4149-afa5-61da1452d547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbdce0d7869276078c48cf3c335c37ec3c8f324e76db30e312485508977ed8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://792d5ec80cac3667bf3ad534b473ae86eca391f49782cfc0938d789eefd24a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://792d5ec80cac3667bf3ad534b473ae86eca391f49782cfc0938d789eefd24a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2afd11e5128cad91161f49b1e5d6ac378dbd319773996dbe702bf678a45a4a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2afd11e5128cad91161f49b1e5d6ac378dbd319773996dbe702bf678a45a4a91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00af788f1e52f5e8adb3f20e61f5fbcfd1090e97a1f24d4ebe926dad23155ae5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00af788f1e52f5e8adb3f20e61f5fbcfd1090e97a1f24d4ebe926dad23155ae5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://156bff53f3008351c3f76a0cc5e9c3eeb4f19a7201392d095bc62012791d9fa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://156bff53f3008351c3f76a0cc5e9c3eeb4f19a7201392d095bc62012791d9fa5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a98899b475454bf9249b6437439cb15a56278a71678cd2c7a430b4c14ef4022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a98899b475454bf9249b6437439cb15a56278a71678cd2c7a430b4c14ef4022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://345ac26e481961ce51e21644b04d31cd5a82c981e9a2355ddd863036cabb4a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://345ac26e481961ce51e21644b04d31cd5a82c981e9a2355ddd863036cabb4a4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4s9w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:17Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:17 crc kubenswrapper[4813]: I1125 10:32:17.706468 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:17 crc kubenswrapper[4813]: I1125 10:32:17.706522 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:17 crc kubenswrapper[4813]: I1125 10:32:17.706535 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:17 crc kubenswrapper[4813]: I1125 10:32:17.706559 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:17 crc kubenswrapper[4813]: I1125 10:32:17.706573 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:17Z","lastTransitionTime":"2025-11-25T10:32:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:17 crc kubenswrapper[4813]: I1125 10:32:17.710954 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8460ec76-ba89-4f8f-9055-d7274ab52d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0292e263e2315d5f0352fb15d9e84e89f103c0b8e3371db2a611b001c5a3fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ab3178c217051fe9026c77a963c194bed57ec0fb9521678f41c7c16235ca789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://ee35613ff013fdd9f9ba4aa81006a99cd328ab65010b9b337815829bfcc88937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1581fa41d3a426258f7c464d5e0f2ad431917ccec0616d26bb8b0affa320c90e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c4c4032f6080041e0b54686cb2c9981d2578e7a2bd02bcc1cf008c8fa3bfb6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7324d51c21107fadbd2f170e16f3cc20fc473ca9b7b1bbe0fc5e64378bd6ab7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a344c8be2b2a24dbe8591e0e33824d415e5551de94478447927c20469a72a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\
"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32898e756d7697bcb5b6ae6780b7b752be67b44b9ce8c2f2459477c7f0b0a28d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6554bcb1ce7e97de39f99556fc4e3db63a583ea45bd87706a3c7737a8bde4f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6554bcb1ce7e97de39f99556fc4e3db63a583ea45bd87706a3c7737a8bde4f5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8s5k7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:17Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:17 crc kubenswrapper[4813]: I1125 10:32:17.748397 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8s86\" (UniqueName: \"kubernetes.io/projected/eccc6bcf-65c9-4741-a1d7-e5545661d3d6-kube-api-access-t8s86\") pod \"ovnkube-control-plane-749d76644c-sbzfj\" (UID: \"eccc6bcf-65c9-4741-a1d7-e5545661d3d6\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sbzfj" Nov 25 10:32:17 crc kubenswrapper[4813]: I1125 10:32:17.748476 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/eccc6bcf-65c9-4741-a1d7-e5545661d3d6-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-sbzfj\" (UID: \"eccc6bcf-65c9-4741-a1d7-e5545661d3d6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sbzfj" Nov 25 10:32:17 crc kubenswrapper[4813]: I1125 10:32:17.749080 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/eccc6bcf-65c9-4741-a1d7-e5545661d3d6-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-sbzfj\" (UID: \"eccc6bcf-65c9-4741-a1d7-e5545661d3d6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sbzfj" Nov 25 10:32:17 crc kubenswrapper[4813]: I1125 10:32:17.749118 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/eccc6bcf-65c9-4741-a1d7-e5545661d3d6-env-overrides\") pod \"ovnkube-control-plane-749d76644c-sbzfj\" (UID: \"eccc6bcf-65c9-4741-a1d7-e5545661d3d6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sbzfj" Nov 25 10:32:17 crc kubenswrapper[4813]: I1125 10:32:17.749798 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/eccc6bcf-65c9-4741-a1d7-e5545661d3d6-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-sbzfj\" (UID: \"eccc6bcf-65c9-4741-a1d7-e5545661d3d6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sbzfj" Nov 25 10:32:17 crc kubenswrapper[4813]: I1125 10:32:17.749873 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/eccc6bcf-65c9-4741-a1d7-e5545661d3d6-env-overrides\") pod \"ovnkube-control-plane-749d76644c-sbzfj\" (UID: \"eccc6bcf-65c9-4741-a1d7-e5545661d3d6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sbzfj" Nov 25 10:32:17 crc kubenswrapper[4813]: I1125 10:32:17.753909 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"061a2a52-878f-4543-8408-3a7b838f8881\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761ff3f6b4afa8edd4892d9fe727e977fb9700a8c7ab1c149c12bfa6431951c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf09669b247e0daa0787d296aa833570e1a542082a7a698bb499dc34f16fa4be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e593ff2a6412d8dfd3cd96e456f4fe9e2f8b04302d5b9036b828a3cf480b573\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11e2aa9eaa941ade1982256194422becbe3f375508cd507f603a822b10e03134\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:17Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:17 crc kubenswrapper[4813]: I1125 10:32:17.758490 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/eccc6bcf-65c9-4741-a1d7-e5545661d3d6-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-sbzfj\" (UID: \"eccc6bcf-65c9-4741-a1d7-e5545661d3d6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sbzfj" Nov 25 10:32:17 crc kubenswrapper[4813]: I1125 10:32:17.767731 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8s86\" (UniqueName: \"kubernetes.io/projected/eccc6bcf-65c9-4741-a1d7-e5545661d3d6-kube-api-access-t8s86\") pod \"ovnkube-control-plane-749d76644c-sbzfj\" (UID: \"eccc6bcf-65c9-4741-a1d7-e5545661d3d6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sbzfj" Nov 25 10:32:17 crc kubenswrapper[4813]: I1125 10:32:17.810248 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:17 crc kubenswrapper[4813]: I1125 10:32:17.810282 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:17 crc kubenswrapper[4813]: I1125 10:32:17.810293 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:17 crc kubenswrapper[4813]: I1125 10:32:17.810308 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:17 crc kubenswrapper[4813]: I1125 10:32:17.810319 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:17Z","lastTransitionTime":"2025-11-25T10:32:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:17 crc kubenswrapper[4813]: I1125 10:32:17.830786 4813 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 25 10:32:17 crc kubenswrapper[4813]: I1125 10:32:17.833804 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sbzfj" Nov 25 10:32:17 crc kubenswrapper[4813]: W1125 10:32:17.853241 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeccc6bcf_65c9_4741_a1d7_e5545661d3d6.slice/crio-c661d2e88dfc7fa0a02d127a908d08e999fd55e7d9446a368b2f45521e371233 WatchSource:0}: Error finding container c661d2e88dfc7fa0a02d127a908d08e999fd55e7d9446a368b2f45521e371233: Status 404 returned error can't find the container with id c661d2e88dfc7fa0a02d127a908d08e999fd55e7d9446a368b2f45521e371233 Nov 25 10:32:17 crc kubenswrapper[4813]: I1125 10:32:17.912637 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:17 crc kubenswrapper[4813]: I1125 10:32:17.912689 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:17 crc kubenswrapper[4813]: I1125 10:32:17.912701 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:17 crc kubenswrapper[4813]: I1125 10:32:17.912718 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:17 crc kubenswrapper[4813]: I1125 10:32:17.912731 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:17Z","lastTransitionTime":"2025-11-25T10:32:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:18 crc kubenswrapper[4813]: I1125 10:32:18.014742 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:18 crc kubenswrapper[4813]: I1125 10:32:18.014795 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:18 crc kubenswrapper[4813]: I1125 10:32:18.014812 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:18 crc kubenswrapper[4813]: I1125 10:32:18.014833 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:18 crc kubenswrapper[4813]: I1125 10:32:18.014848 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:18Z","lastTransitionTime":"2025-11-25T10:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:18 crc kubenswrapper[4813]: I1125 10:32:18.116930 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:18 crc kubenswrapper[4813]: I1125 10:32:18.116961 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:18 crc kubenswrapper[4813]: I1125 10:32:18.116970 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:18 crc kubenswrapper[4813]: I1125 10:32:18.116983 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:18 crc kubenswrapper[4813]: I1125 10:32:18.116992 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:18Z","lastTransitionTime":"2025-11-25T10:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:18 crc kubenswrapper[4813]: I1125 10:32:18.219281 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:18 crc kubenswrapper[4813]: I1125 10:32:18.219323 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:18 crc kubenswrapper[4813]: I1125 10:32:18.219333 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:18 crc kubenswrapper[4813]: I1125 10:32:18.219349 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:18 crc kubenswrapper[4813]: I1125 10:32:18.219359 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:18Z","lastTransitionTime":"2025-11-25T10:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:18 crc kubenswrapper[4813]: I1125 10:32:18.329088 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:18 crc kubenswrapper[4813]: I1125 10:32:18.329127 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:18 crc kubenswrapper[4813]: I1125 10:32:18.329138 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:18 crc kubenswrapper[4813]: I1125 10:32:18.329155 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:18 crc kubenswrapper[4813]: I1125 10:32:18.329166 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:18Z","lastTransitionTime":"2025-11-25T10:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:18 crc kubenswrapper[4813]: I1125 10:32:18.431544 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:18 crc kubenswrapper[4813]: I1125 10:32:18.431583 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:18 crc kubenswrapper[4813]: I1125 10:32:18.431594 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:18 crc kubenswrapper[4813]: I1125 10:32:18.431610 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:18 crc kubenswrapper[4813]: I1125 10:32:18.431621 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:18Z","lastTransitionTime":"2025-11-25T10:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:18 crc kubenswrapper[4813]: I1125 10:32:18.534397 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:18 crc kubenswrapper[4813]: I1125 10:32:18.534430 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:18 crc kubenswrapper[4813]: I1125 10:32:18.534441 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:18 crc kubenswrapper[4813]: I1125 10:32:18.534455 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:18 crc kubenswrapper[4813]: I1125 10:32:18.534465 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:18Z","lastTransitionTime":"2025-11-25T10:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:18 crc kubenswrapper[4813]: I1125 10:32:18.620609 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 10:32:18 crc kubenswrapper[4813]: I1125 10:32:18.620732 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 10:32:18 crc kubenswrapper[4813]: E1125 10:32:18.620784 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 10:32:18 crc kubenswrapper[4813]: E1125 10:32:18.620923 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 10:32:18 crc kubenswrapper[4813]: I1125 10:32:18.636787 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:18 crc kubenswrapper[4813]: I1125 10:32:18.636853 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:18 crc kubenswrapper[4813]: I1125 10:32:18.636877 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:18 crc kubenswrapper[4813]: I1125 10:32:18.636908 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:18 crc kubenswrapper[4813]: I1125 10:32:18.636925 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:18Z","lastTransitionTime":"2025-11-25T10:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:18 crc kubenswrapper[4813]: I1125 10:32:18.739389 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:18 crc kubenswrapper[4813]: I1125 10:32:18.739420 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:18 crc kubenswrapper[4813]: I1125 10:32:18.739429 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:18 crc kubenswrapper[4813]: I1125 10:32:18.739449 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:18 crc kubenswrapper[4813]: I1125 10:32:18.739458 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:18Z","lastTransitionTime":"2025-11-25T10:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:18 crc kubenswrapper[4813]: I1125 10:32:18.835121 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sbzfj" event={"ID":"eccc6bcf-65c9-4741-a1d7-e5545661d3d6","Type":"ContainerStarted","Data":"75f58510a2e937f933fadfec014e5ddff8e6cea4df17e8ade67f4c7af9be7104"} Nov 25 10:32:18 crc kubenswrapper[4813]: I1125 10:32:18.835177 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sbzfj" event={"ID":"eccc6bcf-65c9-4741-a1d7-e5545661d3d6","Type":"ContainerStarted","Data":"bf35ea2947d355207c657bf7ef54d855cead727db293543efaa653bb03718f6d"} Nov 25 10:32:18 crc kubenswrapper[4813]: I1125 10:32:18.835198 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sbzfj" event={"ID":"eccc6bcf-65c9-4741-a1d7-e5545661d3d6","Type":"ContainerStarted","Data":"c661d2e88dfc7fa0a02d127a908d08e999fd55e7d9446a368b2f45521e371233"} Nov 25 10:32:18 crc kubenswrapper[4813]: I1125 10:32:18.836800 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8s5k7_8460ec76-ba89-4f8f-9055-d7274ab52d11/ovnkube-controller/0.log" Nov 25 10:32:18 crc kubenswrapper[4813]: I1125 10:32:18.839529 4813 generic.go:334] "Generic (PLEG): container finished" podID="8460ec76-ba89-4f8f-9055-d7274ab52d11" containerID="d5a344c8be2b2a24dbe8591e0e33824d415e5551de94478447927c20469a72a5" exitCode=1 Nov 25 10:32:18 crc kubenswrapper[4813]: I1125 10:32:18.839561 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" event={"ID":"8460ec76-ba89-4f8f-9055-d7274ab52d11","Type":"ContainerDied","Data":"d5a344c8be2b2a24dbe8591e0e33824d415e5551de94478447927c20469a72a5"} Nov 25 10:32:18 crc kubenswrapper[4813]: I1125 10:32:18.840208 4813 scope.go:117] "RemoveContainer" containerID="d5a344c8be2b2a24dbe8591e0e33824d415e5551de94478447927c20469a72a5" Nov 25 10:32:18 crc kubenswrapper[4813]: I1125 10:32:18.840864 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:18 crc kubenswrapper[4813]: I1125 10:32:18.840889 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:18 crc kubenswrapper[4813]: I1125 10:32:18.840900 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:18 crc kubenswrapper[4813]: I1125 10:32:18.840914 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:18 crc kubenswrapper[4813]: I1125 10:32:18.840926 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:18Z","lastTransitionTime":"2025-11-25T10:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:18 crc kubenswrapper[4813]: I1125 10:32:18.850888 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sbzfj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eccc6bcf-65c9-4741-a1d7-e5545661d3d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf35ea2947d355207c657bf7ef54d855cead727db293543efaa653bb03718f6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8s86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75f58510a2e937f933fadfec014e5ddff8e6cea4df17e8ade67f4c7af9be7104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8s86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sbzfj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:18Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:18 crc kubenswrapper[4813]: I1125 10:32:18.865291 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00ebb057ca6152197fa76fc78787533ab8ddaa1e1a096c624e3efc5fcf091332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616fae5157b8d51f903f870d19e7ed40447c3eb954b0e1bd0b3323c27deb59f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:18Z is after 
2025-08-24T17:21:41Z" Nov 25 10:32:18 crc kubenswrapper[4813]: I1125 10:32:18.876801 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adac7b8b6297f077adc2d0e402547d19845a4b66a1279e143ba89f014ccdbf15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:18Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:18 crc kubenswrapper[4813]: I1125 10:32:18.889594 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rlpbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98439068-3c89-4c1b-8bb8-8aa848ef0cd3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73be3b0cabd20c94bd5c69211038398effe8adbb93eda17dbb136f17fa5ba62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdxm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rlpbx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:18Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:18 crc kubenswrapper[4813]: I1125 10:32:18.901989 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qltmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7637b907-3ae7-4b15-a4b9-a0c2217384a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://713975d4e8de4e14484cbd711f5279ddce3acad00571bf052b0ed728bd1a0ccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qvsb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qltmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:18Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:18 crc kubenswrapper[4813]: I1125 10:32:18.922661 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4s9w7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2ac9045-f02f-4149-afa5-61da1452d547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbdce0d7869276078c48cf3c335c37ec3c8f324e76db30e312485508977ed8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://792d5ec80cac3667bf3ad534b473ae86eca391f49782cfc0938d789eefd24a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://792d5ec80cac3667bf3ad534b473ae86eca391f49782cfc0938d789eefd24a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2afd11e5128cad91161f49b1e5d6ac378dbd319773996dbe702bf678a45a4a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2afd11e5128cad91161f49b1e5d6ac378dbd319773996dbe702bf678a45a4a91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00af788f1e52f5e8adb3f20e61f5fbcfd1090e97a1f24d4ebe926dad23155ae5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00af788f1e52f5e8adb3f20e61f5fbcfd1090e97a1f24d4ebe926dad23155ae5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://156bff53f3008351c3f76a0cc5e9c3eeb4f19a7201392d095bc62012791d9fa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://156bff53f3008351c3f76a0cc5e9c3eeb4f19a7201392d095bc62012791d9fa5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a98899b475454bf9249b6437439cb15a56278a71678cd2c7a430b4c14ef4022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a98899b475454bf9249b6437439cb15a56278a71678cd2c7a430b4c14ef4022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://345ac26e481961ce51e21644b04d31cd5a82c981e9a2355ddd863036cabb4a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://345ac26e481961ce51e21644b04d31cd5a82c981e9a2355ddd863036cabb4a4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4s9w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:18Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:18 crc kubenswrapper[4813]: I1125 10:32:18.943327 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:18 crc kubenswrapper[4813]: I1125 10:32:18.943379 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:18 crc 
kubenswrapper[4813]: I1125 10:32:18.943393 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:18 crc kubenswrapper[4813]: I1125 10:32:18.943412 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:18 crc kubenswrapper[4813]: I1125 10:32:18.943425 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:18Z","lastTransitionTime":"2025-11-25T10:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:18 crc kubenswrapper[4813]: I1125 10:32:18.944191 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8460ec76-ba89-4f8f-9055-d7274ab52d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0292e263e2315d5f0352fb15d9e84e89f103c0b8e3371db2a611b001c5a3fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ab3178c217051fe9026c77a963c194bed57ec0fb9521678f41c7c16235ca789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee35613ff013fdd9f9ba4aa81006a99cd328ab65010b9b337815829bfcc88937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1581fa41d3a426258f7c464d5e0f2ad431917ccec0616d26bb8b0affa320c90e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c4c4032f6080041e0b54686cb2c9981d2578e7a2bd02bcc1cf008c8fa3bfb6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7324d51c21107fadbd2f170e16f3cc20fc473ca9b7b1bbe0fc5e64378bd6ab7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a344c8be2b2a24dbe8591e0e33824d415e5551
de94478447927c20469a72a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32898e756d7697bcb5b6ae6780b7b752be67b44b9ce8c2f2459477c7f0b0a28d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6554bcb1ce7e97de39f99556fc4e3db63a583ea45bd87706a3c7737a8bde4f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6554bcb1ce7e97de39f99556fc4e3db63a583ea45bd87706a3c7737a8bde4f5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8s5k7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:18Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:18 crc kubenswrapper[4813]: I1125 10:32:18.962956 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"061a2a52-878f-4543-8408-3a7b838f8881\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761ff3f6b4afa8edd4892d9fe727e977fb9700a8c7ab1c149c12bfa6431951c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf09669b247e0daa0787d296aa833570e1a542082a7a698bb499dc34f16fa4be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e593ff2a6412d8dfd3cd96e456f4fe9e2f8b04302d5b9036b828a3cf480b573\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11e2aa9eaa941ade1982256194422becbe3f375508cd507f603a822b10e03134\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:18Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:18 crc kubenswrapper[4813]: I1125 10:32:18.975272 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:18Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:18 crc kubenswrapper[4813]: I1125 10:32:18.990616 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:18Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.001839 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:18Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.014205 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03303956e8d88df49c9c142a7074fa39272a78ea67e868b302d3a663d7f7178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:19Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.027403 4813 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86379c39-b839-4552-949c-35431188a3a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf4d6feac8fd516ce2d5e2ec13519c2bbd2d152cffe7c434fe2c4b478e8c9a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80f2017cddd8c12997b1818074df5aa37a902dca43c4b60dda58080e1887f8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225dc69c294a0063eda858d71902e848fb59d4595c25bfeecdf8dfb60fdcd6f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cbb3888ff07d07784e188a0b7b49e0f5b421cfaeb61924a0a46094fb3795b32\\
\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e393f04b541e0fc8c686b42396605529aa65fdaaf6602dd7c64a322a5071d643\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T10:31:57Z\\\",\\\"message\\\":\\\"W1125 10:31:46.900040 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1125 10:31:46.900557 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764066706 cert, and key in /tmp/serving-cert-1749499007/serving-signer.crt, /tmp/serving-cert-1749499007/serving-signer.key\\\\nI1125 10:31:47.317086 1 observer_polling.go:159] Starting file observer\\\\nW1125 10:31:47.321027 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 10:31:47.321219 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 10:31:47.325062 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1749499007/tls.crt::/tmp/serving-cert-1749499007/tls.key\\\\\\\"\\\\nF1125 10:31:57.761534 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46e1b456988c700012c86fac792b65d2e7c9a049057d5a17efbf600418191910\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3af1cb4a9a556e116874a9b7f7cb75a99db83b991b65280b6b2a96233ca0a85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3af1cb4a9a556e116874a9b7f7cb75a99db83b991b65280b6b2a96233ca0a85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:31:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:19Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.037261 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mmh87" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bcb41f8-67f5-4a87-8b49-07da054e0c81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fbf69eb2f0afb160e40675e9a17e8a9798a3f02de6a2f3aae7a30ef989e5479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xtc7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mmh87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:19Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.046325 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.046353 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.046361 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.046375 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.046387 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:19Z","lastTransitionTime":"2025-11-25T10:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.047487 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ece7e9c-d49a-4348-98ec-bd6ab589f750\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b85e2f2d2a870b205f19402a20540fa67104d12d2fcd412ada24c78b0602f2ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j55j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c16599a2b18976267f55176085b4b11e3e253e308707081d06d28d64f4dbb627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j55j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-knhz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:19Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.059907 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:19Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.070710 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:19Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.083483 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:19Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.096905 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4s9w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2ac9045-f02f-4149-afa5-61da1452d547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbdce0d7869276078c48cf3c335c37ec3c8f324e76db30e312485508977ed8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://792d5ec80cac3667bf3ad534b473ae86eca391f49782cfc0938d789eefd24a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://792d5ec80cac3667bf3ad534b473ae86eca391f49782cfc0938d789eefd24a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2afd11e5128cad91161f49b1e5d6ac378dbd319773996dbe702bf678a45a4a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2afd11e5128cad91161f49b1e5d6ac378dbd319773996dbe702bf678a45a4a91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00af788f1e52f5e8adb3f20e61f5fbcfd1090e97a1f24d4ebe926dad23155ae5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00af788f1e52f5e8adb3f20e61f5fbcfd1090e97a1f24d4ebe926dad23155ae5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://156bff53f3008351c3f76a0cc5e9c3eeb4f19a7201392d095bc62012791d9fa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://156bff53f3008351c3f76a0cc5e9c3eeb4f19a7201392d095bc62012791d9fa5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a98899b475454bf9249b6437439cb15a56278a71678cd2c7a430b4c14ef4022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a98899b475454bf9249b6437439cb15a56278a71678cd2c7a430b4c14ef4022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://345ac26e481961ce51e21644b04d31cd5a82c981e9a2355ddd863036cabb4a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://345ac26e481961ce51e21644b04d31cd5a82c981e9a2355ddd863036cabb4a4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4s9w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:19Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.114880 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8460ec76-ba89-4f8f-9055-d7274ab52d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0292e263e2315d5f0352fb15d9e84e89f103c0b8e3371db2a611b001c5a3fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ab3178c217051fe9026c77a963c194bed57ec0fb9521678f41c7c16235ca789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee35613ff013fdd9f9ba4aa81006a99cd328ab65010b9b337815829bfcc88937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1581fa41d3a426258f7c464d5e0f2ad431917ccec0616d26bb8b0affa320c90e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c4c4032f6080041e0b54686cb2c9981d2578e7a2bd02bcc1cf008c8fa3bfb6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7324d51c21107fadbd2f170e16f3cc20fc473ca9b7b1bbe0fc5e64378bd6ab7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a344c8be2b2a24dbe8591e0e33824d415e5551
de94478447927c20469a72a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5a344c8be2b2a24dbe8591e0e33824d415e5551de94478447927c20469a72a5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T10:32:18Z\\\",\\\"message\\\":\\\"kg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1125 10:32:17.537417 6097 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1125 10:32:17.537690 6097 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1125 10:32:17.537889 6097 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 10:32:17.538045 6097 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 10:32:17.538128 6097 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 10:32:17.538254 6097 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 10:32:17.538347 6097 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 10:32:17.539783 6097 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1125 10:32:17.539844 6097 factory.go:656] Stopping watch factory\\\\nI1125 10:32:17.539865 6097 ovnkube.go:599] Stopped ovnkube\\\\nI1125 
10:32:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32898e756d7697bcb5b6ae6780b7b752be67b44b9ce8c2f2459477c7f0b0a28d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6554bcb1ce7e97de39f99556fc4e3db63a583ea45bd87706a3c7737a8bde4f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0
d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6554bcb1ce7e97de39f99556fc4e3db63a583ea45bd87706a3c7737a8bde4f5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8s5k7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:19Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.127712 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"061a2a52-878f-4543-8408-3a7b838f8881\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761ff3f6b4afa8edd4892d9fe727e977fb9700a8c7ab1c149c12bfa6431951c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf09669b247e0daa0787d296aa833570e1a542082a7a698bb499dc34f16fa4be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b
89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e593ff2a6412d8dfd3cd96e456f4fe9e2f8b04302d5b9036b828a3cf480b573\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11e2aa9eaa941ade1982256194422becbe3f375508cd507f603a822b10e03134\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:19Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.140252 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03303956e8d88df49c9c142a7074fa39272a78ea67e868b302d3a663d7f7178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:19Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.148230 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.148282 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.148293 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.148309 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.148321 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:19Z","lastTransitionTime":"2025-11-25T10:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.151855 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ece7e9c-d49a-4348-98ec-bd6ab589f750\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b85e2f2d2a870b205f19402a20540fa67104d12d2fcd412ada24c78b0602f2ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j55j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c16599a2b18976267f55176085b4b11e3e253e308707081d06d28d64f4dbb627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j55j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-knhz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:19Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.165201 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86379c39-b839-4552-949c-35431188a3a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf4d6feac8fd516ce2d5e2ec13519c2bbd2d152cffe7c434fe2c4b478e8c9a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80f2017cddd8c12997b1818074df5aa37a902dca43c4b60dda58080e1887f8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225dc69c294a0063eda858d71902e848fb59d4595c25bfeecdf8dfb60fdcd6f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cbb3888ff07d07784e188a0b7b49e0f5b421cfaeb61924a0a46094fb3795b32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e393f04b541e0fc8c686b42396605529aa65fdaaf6602dd7c64a322a5071d643\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T10:31:57Z\\\",\\\"message\\\":\\\"W1125 10:31:46.900040 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1125 10:31:46.900557 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764066706 cert, and key in /tmp/serving-cert-1749499007/serving-signer.crt, /tmp/serving-cert-1749499007/serving-signer.key\\\\nI1125 10:31:47.317086 1 observer_polling.go:159] Starting file observer\\\\nW1125 10:31:47.321027 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 10:31:47.321219 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 10:31:47.325062 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1749499007/tls.crt::/tmp/serving-cert-1749499007/tls.key\\\\\\\"\\\\nF1125 10:31:57.761534 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46e1b456988c700012c86fac792b65d2e7c9a049057d5a17efbf600418191910\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3af1cb4a9a556e116874a9b7f7cb75a99db83b991b65280b6b2a96233ca0a85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3af1cb4a9a556e116874a9b7f7cb75a99db83b991b65280b6b2a96233ca0a85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:31:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:19Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.176978 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mmh87" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bcb41f8-67f5-4a87-8b49-07da054e0c81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fbf69eb2f0afb160e40675e9a17e8a9798a3f02de6a2f3aae7a30ef989e5479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xtc7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mmh87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:19Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.189475 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adac7b8b6297f077adc2d0e402547d19845a4b66a1279e143ba89f014ccdbf15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:19Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.205857 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rlpbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98439068-3c89-4c1b-8bb8-8aa848ef0cd3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73be3b0cabd20c94bd5c69211038398effe8adbb93eda17dbb136f17fa5ba62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdxm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rlpbx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:19Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.224770 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qltmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7637b907-3ae7-4b15-a4b9-a0c2217384a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://713975d4e8de4e14484cbd711f5279ddce3acad00571bf052b0ed728bd1a0ccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qvsb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qltmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:19Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.239948 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sbzfj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eccc6bcf-65c9-4741-a1d7-e5545661d3d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf35ea2947d355207c657bf7ef54d855cead727db293543efaa653bb03718f6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8s86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75f58510a2e937f933fadfec014e5ddff8e6cea4df17e8ade67f4c7af9be7104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8s86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sbzfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:19Z is after 2025-08-24T17:21:41Z" Nov 25 
10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.249826 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.249856 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.249867 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.249905 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.249917 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:19Z","lastTransitionTime":"2025-11-25T10:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.262839 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00ebb057ca6152197fa76fc78787533ab8ddaa1e1a096c624e3efc5fcf091332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616fae5157b8d51f903f870d19e7ed40447c3eb954b0e1bd0b3323c27deb59f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:19Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.351935 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.351972 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.351980 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.351994 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.352003 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:19Z","lastTransitionTime":"2025-11-25T10:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.454643 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.454874 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.454936 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.455025 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.455089 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:19Z","lastTransitionTime":"2025-11-25T10:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.557095 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.557384 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.557466 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.557557 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.557654 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:19Z","lastTransitionTime":"2025-11-25T10:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.621269 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 10:32:19 crc kubenswrapper[4813]: E1125 10:32:19.621385 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.659784 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.660022 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.660129 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.660212 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.660287 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:19Z","lastTransitionTime":"2025-11-25T10:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.669268 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 10:32:19 crc kubenswrapper[4813]: E1125 10:32:19.669414 4813 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 10:32:19 crc kubenswrapper[4813]: E1125 10:32:19.669502 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 10:32:35.669483474 +0000 UTC m=+52.799193370 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.711243 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-w28xl"] Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.712134 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w28xl" Nov 25 10:32:19 crc kubenswrapper[4813]: E1125 10:32:19.712330 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-w28xl" podUID="74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2" Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.729149 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8460ec76-ba89-4f8f-9055-d7274ab52d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0292e263e2315d5f0352fb15d9e84e89f103c0b8e3371db2a611b001c5a3fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ab3178c217051fe9026c77a963c194bed57ec0fb9521678f41c7c16235ca789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee35613ff013fdd9f9ba4aa81006a99cd328ab65010b9b337815829bfcc88937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1581fa41d3a426258f7c464d5e0f2ad431917ccec0616d26bb8b0affa320c90e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c4c4032f6080041e0b54686cb2c9981d2578e7a2bd02bcc1cf008c8fa3bfb6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7324d51c21107fadbd2f170e16f3cc20fc473ca9b7b1bbe0fc5e64378bd6ab7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257
453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a344c8be2b2a24dbe8591e0e33824d415e5551de94478447927c20469a72a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5a344c8be2b2a24dbe8591e0e33824d415e5551de94478447927c20469a72a5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T10:32:18Z\\\",\\\"message\\\":\\\"kg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1125 10:32:17.537417 6097 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1125 10:32:17.537690 6097 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1125 10:32:17.537889 6097 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 10:32:17.538045 6097 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 10:32:17.538128 6097 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 10:32:17.538254 6097 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 10:32:17.538347 6097 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 10:32:17.539783 6097 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1125 10:32:17.539844 6097 factory.go:656] Stopping watch factory\\\\nI1125 10:32:17.539865 6097 ovnkube.go:599] Stopped ovnkube\\\\nI1125 
10:32:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32898e756d7697bcb5b6ae6780b7b752be67b44b9ce8c2f2459477c7f0b0a28d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6554bcb1ce7e97de39f99556fc4e3db63a583ea45bd87706a3c7737a8bde4f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0
d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6554bcb1ce7e97de39f99556fc4e3db63a583ea45bd87706a3c7737a8bde4f5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8s5k7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:19Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.739469 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-w28xl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4dw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4dw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-w28xl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:19Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.750476 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"061a2a52-878f-4543-8408-3a7b838f8881\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761ff3f6b4afa8edd4892d9fe727e977fb9700a8c7ab1c149c12bfa6431951c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf09669b247e0daa0787d296aa833570e1a542082a7a698bb499dc34f16fa4be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e593ff2a6412d8dfd3cd96e456f4fe9e2f8b04302d5b9036b828a3cf480b573\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11e2aa9eaa941ade1982256194422becbe3f375508cd507f603a822b10e03134\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:19Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.762158 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.762565 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.762581 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:19Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.762675 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.762916 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.762930 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:19Z","lastTransitionTime":"2025-11-25T10:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.770125 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:32:19 crc kubenswrapper[4813]: E1125 10:32:19.770351 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:32:35.770306785 +0000 UTC m=+52.900016671 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.770523 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.770556 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.770578 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 10:32:19 crc kubenswrapper[4813]: E1125 10:32:19.770645 4813 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 10:32:19 crc kubenswrapper[4813]: E1125 10:32:19.770709 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 10:32:19 crc kubenswrapper[4813]: E1125 10:32:19.770727 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 10:32:19 crc kubenswrapper[4813]: E1125 10:32:19.770737 4813 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 10:32:19 crc kubenswrapper[4813]: E1125 10:32:19.770747 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 10:32:19 crc kubenswrapper[4813]: E1125 10:32:19.770713 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 10:32:35.770705765 +0000 UTC m=+52.900415651 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 10:32:19 crc kubenswrapper[4813]: E1125 10:32:19.770777 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 10:32:19 crc kubenswrapper[4813]: E1125 10:32:19.770915 4813 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 10:32:19 crc kubenswrapper[4813]: E1125 10:32:19.771000 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-25 10:32:35.770832948 +0000 UTC m=+52.900542834 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 10:32:19 crc kubenswrapper[4813]: E1125 10:32:19.771028 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-25 10:32:35.771017313 +0000 UTC m=+52.900727199 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.779463 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:19Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.791521 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:19Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.803207 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4s9w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2ac9045-f02f-4149-afa5-61da1452d547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbdce0d7869276078c48cf3c335c37ec3c8f324e76db30e312485508977ed8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://792d5ec80cac3667bf3ad534b473ae86eca391f49782cfc0938d789eefd24a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://792d5ec80cac3667bf3ad534b473ae86eca391f49782cfc0938d789eefd24a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2afd11e5128cad91161f49b1e5d6ac378dbd319773996dbe702bf678a45a4a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2afd11e5128cad91161f49b1e5d6ac378dbd319773996dbe702bf678a45a4a91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00af788f1e52f5e8adb3f20e61f5fbcfd1090e97a1f24d4ebe926dad23155ae5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00af788f1e52f5e8adb3f20e61f5fbcfd1090e97a1f24d4ebe926dad23155ae5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://156bff53f3008351c3f76a0cc5e9c3eeb4f19a7201392d095bc62012791d9fa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://156bff53f3008351c3f76a0cc5e9c3eeb4f19a7201392d095bc62012791d9fa5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a98899b475454bf9249b6437439cb15a56278a71678cd2c7a430b4c14ef4022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a98899b475454bf9249b6437439cb15a56278a71678cd2c7a430b4c14ef4022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://345ac26e481961ce51e21644b04d31cd5a82c981e9a2355ddd863036cabb4a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://345ac26e481961ce51e21644b04d31cd5a82c981e9a2355ddd863036cabb4a4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4s9w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:19Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.815325 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03303956e8d88df49c9c142a7074fa39272a78ea67e868b302d3a663d7f7178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:19Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.830489 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86379c39-b839-4552-949c-35431188a3a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf4d6feac8fd516ce2d5e2ec13519c2bbd2d152cffe7c434fe2c4b478e8c9a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80f2017cddd8c12997b1818074df5aa37a902dca43c4b60dda58080e1887f8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225dc69c294a0063eda858d71902e848fb59d4595c25bfeecdf8dfb60fdcd6f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cbb3888ff07d07784e188a0b7b49e0f5b421cfaeb61924a0a46094fb3795b32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e393f04b541e0fc8c686b42396605529aa65fdaaf6602dd7c64a322a5071d643\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T10:31:57Z\\\",\\\"message\\\":\\\"W1125 10:31:46.900040 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1125 10:31:46.900557 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764066706 cert, and key in /tmp/serving-cert-1749499007/serving-signer.crt, /tmp/serving-cert-1749499007/serving-signer.key\\\\nI1125 10:31:47.317086 1 observer_polling.go:159] Starting file observer\\\\nW1125 10:31:47.321027 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 10:31:47.321219 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 10:31:47.325062 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1749499007/tls.crt::/tmp/serving-cert-1749499007/tls.key\\\\\\\"\\\\nF1125 10:31:57.761534 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46e1b456988c700012c86fac792b65d2e7c9a049057d5a17efbf600418191910\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3af1cb4a9a556e116874a9b7f7cb75a99db83b991b65280b6b2a96233ca0a85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://f3af1cb4a9a556e116874a9b7f7cb75a99db83b991b65280b6b2a96233ca0a85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:31:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:19Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.840430 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mmh87" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bcb41f8-67f5-4a87-8b49-07da054e0c81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fbf69eb2f0afb160e40675e9a17e8a9798a3f02de6a2f3aae7a30ef989e5479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xtc7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mmh87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:19Z is after 
2025-08-24T17:21:41Z" Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.844458 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8s5k7_8460ec76-ba89-4f8f-9055-d7274ab52d11/ovnkube-controller/0.log" Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.847542 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" event={"ID":"8460ec76-ba89-4f8f-9055-d7274ab52d11","Type":"ContainerStarted","Data":"0e66dd83e85e97c04906e16d68e4fa2de6af1eeb8595d8fd6fd8beae180f2b8e"} Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.847724 4813 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.853055 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ece7e9c-d49a-4348-98ec-bd6ab589f750\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b85e2f2d2a870b205f19402a20540fa67104d12d2fcd412ada24c78b0602f2ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j55j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c16599a2b18976267f55176085b4b11e3e253e308707081d06d28d64f4dbb627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"moun
tPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j55j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-knhz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:19Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.865551 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.865588 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.865598 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.865611 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.865622 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:19Z","lastTransitionTime":"2025-11-25T10:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.868216 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00ebb057ca6152197fa76fc78787533ab8ddaa1e1a096c624e3efc5fcf091332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616fae5157b8d51f903f870d19e7ed40447c3eb954b0e1bd0b3323c27deb59f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:19Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.871149 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4dw8\" (UniqueName: 
\"kubernetes.io/projected/74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2-kube-api-access-n4dw8\") pod \"network-metrics-daemon-w28xl\" (UID: \"74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2\") " pod="openshift-multus/network-metrics-daemon-w28xl" Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.871179 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2-metrics-certs\") pod \"network-metrics-daemon-w28xl\" (UID: \"74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2\") " pod="openshift-multus/network-metrics-daemon-w28xl" Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.879906 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adac7b8b6297f077adc2d0e402547d19845a4b66a1279e143ba89f014ccdbf15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:19Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.894602 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rlpbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98439068-3c89-4c1b-8bb8-8aa848ef0cd3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73be3b0cabd20c94bd5c69211038398effe8adbb93eda17dbb136f17fa5ba62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdxm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rlpbx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:19Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.904895 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qltmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7637b907-3ae7-4b15-a4b9-a0c2217384a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://713975d4e8de4e14484cbd711f5279ddce3acad00571bf052b0ed728bd1a0ccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qvsb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qltmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:19Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.917430 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sbzfj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eccc6bcf-65c9-4741-a1d7-e5545661d3d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf35ea2947d355207c657bf7ef54d855cead727db293543efaa653bb03718f6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8s86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75f58510a2e937f933fadfec014e5ddff8e6cea4df17e8ade67f4c7af9be7104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8s86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sbzfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:19Z is after 2025-08-24T17:21:41Z" Nov 25 
10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.929885 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86379c39-b839-4552-949c-35431188a3a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf4d6feac8fd516ce2d5e2ec13519c2bbd2d152cffe7c434fe2c4b478e8c9a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80f2017cddd8c12997b1818074df5aa37a902dca43c4b60dda58080e1887f8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225dc69c294a0063eda858d71902e848fb59d4595c25bfeecdf8dfb60fdcd6f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cbb3888ff07d07784e188a0b7b49e0f5b421cfaeb61924a0a46094fb3795b32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e393f04b541e0fc8c686b42396605529aa65fdaaf6602dd7c64a322a5071d643\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T10:31:57Z\\\",\\\"message\\\":\\\"W1125 10:31:46.900040 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1125 10:31:46.900557 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764066706 cert, and key in /tmp/serving-cert-1749499007/serving-signer.crt, /tmp/serving-cert-1749499007/serving-signer.key\\\\nI1125 10:31:47.317086 1 observer_polling.go:159] Starting file observer\\\\nW1125 10:31:47.321027 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 10:31:47.321219 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 10:31:47.325062 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1749499007/tls.crt::/tmp/serving-cert-1749499007/tls.key\\\\\\\"\\\\nF1125 10:31:57.761534 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46e1b456988c700012c86fac792b65d2e7c9a049057d5a17efbf600418191910\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3af1cb4a9a556e116874a9b7f7cb75a99db83b991b65280b6b2a96233ca0a85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3af1cb4a9a556e116874a9b7f7cb75a99db83b991b65280b6b2a96233ca0a85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:31:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:19Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.945201 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mmh87" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bcb41f8-67f5-4a87-8b49-07da054e0c81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fbf69eb2f0afb160e40675e9a17e8a9798a3f02de6a2f3aae7a30ef989e5479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xtc7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mmh87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:19Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.960481 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ece7e9c-d49a-4348-98ec-bd6ab589f750\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b85e2f2d2a870b205f19402a20540fa67104d12d2fcd412ada24c78b0602f2ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j55j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c16599a2b18976267f55176085b4b11e3e253e308707081d06d28d64f4dbb627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j55j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-knhz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:19Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.968356 4813 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.968407 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.968416 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.968432 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.968450 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:19Z","lastTransitionTime":"2025-11-25T10:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.972337 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4dw8\" (UniqueName: \"kubernetes.io/projected/74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2-kube-api-access-n4dw8\") pod \"network-metrics-daemon-w28xl\" (UID: \"74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2\") " pod="openshift-multus/network-metrics-daemon-w28xl" Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.972419 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2-metrics-certs\") pod \"network-metrics-daemon-w28xl\" (UID: \"74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2\") " pod="openshift-multus/network-metrics-daemon-w28xl" Nov 25 10:32:19 crc kubenswrapper[4813]: E1125 10:32:19.972600 4813 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 10:32:19 crc kubenswrapper[4813]: E1125 10:32:19.972813 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2-metrics-certs podName:74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2 nodeName:}" failed. No retries permitted until 2025-11-25 10:32:20.47278945 +0000 UTC m=+37.602499366 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2-metrics-certs") pod "network-metrics-daemon-w28xl" (UID: "74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.974921 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sbzfj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eccc6bcf-65c9-4741-a1d7-e5545661d3d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf35ea2947d355207c657bf7ef54d855cead727db293543efaa653bb03718f6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8s86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75f58510a2e937f933fadfec014e5ddff8e6cea4df17e8ade67f4c7af9be7104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8s86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sbzfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:19Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.987142 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00ebb057ca6152197fa76fc78787533ab8ddaa1e1a096c624e3efc5fcf091332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616fae5157b8d51f903f870d19e7ed40447c3eb954b0e1bd0b3323c27deb59f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:19Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:19 crc kubenswrapper[4813]: I1125 10:32:19.993127 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4dw8\" (UniqueName: \"kubernetes.io/projected/74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2-kube-api-access-n4dw8\") pod \"network-metrics-daemon-w28xl\" (UID: \"74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2\") " pod="openshift-multus/network-metrics-daemon-w28xl" Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.000472 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adac7b8b6297f077adc2d0e402547d19845a4b66a1279e143ba89f014ccdbf15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:19Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.018992 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rlpbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98439068-3c89-4c1b-8bb8-8aa848ef0cd3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73be3b0cabd20c94bd5c69211038398effe8adbb93eda17dbb136f17fa5ba62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdxm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rlpbx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:20Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.029314 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qltmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7637b907-3ae7-4b15-a4b9-a0c2217384a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://713975d4e8de4e14484cbd711f5279ddce3acad00571bf052b0ed728bd1a0ccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qvsb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qltmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:20Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.043806 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4s9w7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2ac9045-f02f-4149-afa5-61da1452d547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbdce0d7869276078c48cf3c335c37ec3c8f324e76db30e312485508977ed8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://792d5ec80cac3667bf3ad534b473ae86eca391f49782cfc0938d789eefd24a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://792d5ec80cac3667bf3ad534b473ae86eca391f49782cfc0938d789eefd24a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2afd11e5128cad91161f49b1e5d6ac378dbd319773996dbe702bf678a45a4a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2afd11e5128cad91161f49b1e5d6ac378dbd319773996dbe702bf678a45a4a91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00af788f1e52f5e8adb3f20e61f5fbcfd1090e97a1f24d4ebe926dad23155ae5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00af788f1e52f5e8adb3f20e61f5fbcfd1090e97a1f24d4ebe926dad23155ae5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://156bff53f3008351c3f76a0cc5e9c3eeb4f19a7201392d095bc62012791d9fa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://156bff53f3008351c3f76a0cc5e9c3eeb4f19a7201392d095bc62012791d9fa5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a98899b475454bf9249b6437439cb15a56278a71678cd2c7a430b4c14ef4022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a98899b475454bf9249b6437439cb15a56278a71678cd2c7a430b4c14ef4022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://345ac26e481961ce51e21644b04d31cd5a82c981e9a2355ddd863036cabb4a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://345ac26e481961ce51e21644b04d31cd5a82c981e9a2355ddd863036cabb4a4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4s9w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:20Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.070636 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.070665 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:20 crc 
kubenswrapper[4813]: I1125 10:32:20.070674 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.070712 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.070722 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:20Z","lastTransitionTime":"2025-11-25T10:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.118639 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8460ec76-ba89-4f8f-9055-d7274ab52d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0292e263e2315d5f0352fb15d9e84e89f103c0b8e3371db2a611b001c5a3fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ab3178c217051fe9026c77a963c194bed57ec0fb9521678f41c7c16235ca789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee35613ff013fdd9f9ba4aa81006a99cd328ab65010b9b337815829bfcc88937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1581fa41d3a426258f7c464d5e0f2ad431917ccec0616d26bb8b0affa320c90e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c4c4032f6080041e0b54686cb2c9981d2578e7a2bd02bcc1cf008c8fa3bfb6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7324d51c21107fadbd2f170e16f3cc20fc473ca9b7b1bbe0fc5e64378bd6ab7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e66dd83e85e97c04906e16d68e4fa2de6af1eeb
8595d8fd6fd8beae180f2b8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5a344c8be2b2a24dbe8591e0e33824d415e5551de94478447927c20469a72a5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T10:32:18Z\\\",\\\"message\\\":\\\"kg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1125 10:32:17.537417 6097 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1125 10:32:17.537690 6097 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1125 10:32:17.537889 6097 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 10:32:17.538045 6097 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 10:32:17.538128 6097 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 10:32:17.538254 6097 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 10:32:17.538347 6097 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 10:32:17.539783 6097 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1125 10:32:17.539844 6097 factory.go:656] Stopping watch factory\\\\nI1125 10:32:17.539865 6097 ovnkube.go:599] Stopped ovnkube\\\\nI1125 
10:32:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32898e756d7697bcb5b6ae6780b7b752be67b44b9ce8c2f2459477c7f0b0a28d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\
\"containerID\\\":\\\"cri-o://6554bcb1ce7e97de39f99556fc4e3db63a583ea45bd87706a3c7737a8bde4f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6554bcb1ce7e97de39f99556fc4e3db63a583ea45bd87706a3c7737a8bde4f5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8s5k7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:20Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.130188 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-w28xl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4dw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4dw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-w28xl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:20Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.145885 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"061a2a52-878f-4543-8408-3a7b838f8881\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761ff3f6b4afa8edd4892d9fe727e977fb9700a8c7ab1c149c12bfa6431951c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf09669b247e0daa0787d296aa833570e1a542082a7a698bb499dc34f16fa4be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e593ff2a6412d8dfd3cd96e456f4fe9e2f8b04302d5b9036b828a3cf480b573\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11e2aa9eaa941ade1982256194422becbe3f375508cd507f603a822b10e03134\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:20Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.159779 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:20Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.173175 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.173222 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.173233 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.173249 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.173261 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:20Z","lastTransitionTime":"2025-11-25T10:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.174450 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:20Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.186221 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:20Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.198805 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03303956e8d88df49c9c142a7074fa39272a78ea67e868b302d3a663d7f7178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:20Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.276037 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.276089 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.276096 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.276110 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.276119 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:20Z","lastTransitionTime":"2025-11-25T10:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.377602 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.377646 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.377658 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.377672 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.377711 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:20Z","lastTransitionTime":"2025-11-25T10:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.469647 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.469702 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.469717 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.469740 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.469752 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:20Z","lastTransitionTime":"2025-11-25T10:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.477594 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2-metrics-certs\") pod \"network-metrics-daemon-w28xl\" (UID: \"74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2\") " pod="openshift-multus/network-metrics-daemon-w28xl" Nov 25 10:32:20 crc kubenswrapper[4813]: E1125 10:32:20.477759 4813 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 10:32:20 crc kubenswrapper[4813]: E1125 10:32:20.477825 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2-metrics-certs podName:74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2 nodeName:}" failed. No retries permitted until 2025-11-25 10:32:21.47780838 +0000 UTC m=+38.607518266 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2-metrics-certs") pod "network-metrics-daemon-w28xl" (UID: "74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 10:32:20 crc kubenswrapper[4813]: E1125 10:32:20.483289 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b8f6803-8c92-44d2-bc35-374b0f00608e\\\",\\\"systemUUID\\\":\\\"85f815b0-dc24-49ca-a7fb-6bc8e198cbb1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:20Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.486595 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.486624 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.486632 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.486645 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.486656 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:20Z","lastTransitionTime":"2025-11-25T10:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:20 crc kubenswrapper[4813]: E1125 10:32:20.500619 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b8f6803-8c92-44d2-bc35-374b0f00608e\\\",\\\"systemUUID\\\":\\\"85f815b0-dc24-49ca-a7fb-6bc8e198cbb1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:20Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.504828 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.504866 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.504876 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.504891 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.504901 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:20Z","lastTransitionTime":"2025-11-25T10:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:20 crc kubenswrapper[4813]: E1125 10:32:20.517754 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b8f6803-8c92-44d2-bc35-374b0f00608e\\\",\\\"systemUUID\\\":\\\"85f815b0-dc24-49ca-a7fb-6bc8e198cbb1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:20Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.521135 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.521185 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.521197 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.521214 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.521225 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:20Z","lastTransitionTime":"2025-11-25T10:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:20 crc kubenswrapper[4813]: E1125 10:32:20.536264 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b8f6803-8c92-44d2-bc35-374b0f00608e\\\",\\\"systemUUID\\\":\\\"85f815b0-dc24-49ca-a7fb-6bc8e198cbb1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:20Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.539758 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.539827 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.539839 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.539857 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.539897 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:20Z","lastTransitionTime":"2025-11-25T10:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:20 crc kubenswrapper[4813]: E1125 10:32:20.555081 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b8f6803-8c92-44d2-bc35-374b0f00608e\\\",\\\"systemUUID\\\":\\\"85f815b0-dc24-49ca-a7fb-6bc8e198cbb1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:20Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:20 crc kubenswrapper[4813]: E1125 10:32:20.555256 4813 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.556920 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.556950 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.556961 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.556975 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.556987 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:20Z","lastTransitionTime":"2025-11-25T10:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.621575 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.621571 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 10:32:20 crc kubenswrapper[4813]: E1125 10:32:20.621757 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 10:32:20 crc kubenswrapper[4813]: E1125 10:32:20.621915 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.658905 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.658950 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.658961 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.658976 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.658989 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:20Z","lastTransitionTime":"2025-11-25T10:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.760868 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.760903 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.760911 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.760924 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.760933 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:20Z","lastTransitionTime":"2025-11-25T10:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.851827 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8s5k7_8460ec76-ba89-4f8f-9055-d7274ab52d11/ovnkube-controller/1.log" Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.852508 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8s5k7_8460ec76-ba89-4f8f-9055-d7274ab52d11/ovnkube-controller/0.log" Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.855196 4813 generic.go:334] "Generic (PLEG): container finished" podID="8460ec76-ba89-4f8f-9055-d7274ab52d11" containerID="0e66dd83e85e97c04906e16d68e4fa2de6af1eeb8595d8fd6fd8beae180f2b8e" exitCode=1 Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.855254 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" event={"ID":"8460ec76-ba89-4f8f-9055-d7274ab52d11","Type":"ContainerDied","Data":"0e66dd83e85e97c04906e16d68e4fa2de6af1eeb8595d8fd6fd8beae180f2b8e"} Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.855918 4813 scope.go:117] "RemoveContainer" containerID="d5a344c8be2b2a24dbe8591e0e33824d415e5551de94478447927c20469a72a5" Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.857538 4813 scope.go:117] "RemoveContainer" containerID="0e66dd83e85e97c04906e16d68e4fa2de6af1eeb8595d8fd6fd8beae180f2b8e" Nov 25 10:32:20 crc kubenswrapper[4813]: E1125 10:32:20.857773 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-8s5k7_openshift-ovn-kubernetes(8460ec76-ba89-4f8f-9055-d7274ab52d11)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" podUID="8460ec76-ba89-4f8f-9055-d7274ab52d11" Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.862621 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.862656 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.862667 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.862712 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.862731 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:20Z","lastTransitionTime":"2025-11-25T10:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.871556 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03303956e8d88df49c9c142a7074fa39272a78ea67e868b302d3a663d7f7178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:20Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.887019 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86379c39-b839-4552-949c-35431188a3a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf4d6feac8fd516ce2d5e2ec13519c2bbd2d152cffe7c434fe2c4b478e8c9a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80f2017cddd8c12997b1818074df5aa37a902dca43c4b60dda58080e1887f8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225dc69c294a0063eda858d71902e848fb59d4595c25bfeecdf8dfb60fdcd6f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cbb3888ff07d07784e188a0b7b49e0f5b421cfaeb61924a0a46094fb3795b32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e393f04b541e0fc8c686b42396605529aa65fdaaf6602dd7c64a322a5071d643\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T10:31:57Z\\\",\\\"message\\\":\\\"W1125 10:31:46.900040 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1125 10:31:46.900557 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764066706 cert, and key in /tmp/serving-cert-1749499007/serving-signer.crt, /tmp/serving-cert-1749499007/serving-signer.key\\\\nI1125 10:31:47.317086 1 observer_polling.go:159] Starting file observer\\\\nW1125 10:31:47.321027 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 10:31:47.321219 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 10:31:47.325062 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1749499007/tls.crt::/tmp/serving-cert-1749499007/tls.key\\\\\\\"\\\\nF1125 10:31:57.761534 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46e1b456988c700012c86fac792b65d2e7c9a049057d5a17efbf600418191910\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3af1cb4a9a556e116874a9b7f7cb75a99db83b991b65280b6b2a96233ca0a85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://f3af1cb4a9a556e116874a9b7f7cb75a99db83b991b65280b6b2a96233ca0a85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:31:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:20Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.897031 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mmh87" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bcb41f8-67f5-4a87-8b49-07da054e0c81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fbf69eb2f0afb160e40675e9a17e8a9798a3f02de6a2f3aae7a30ef989e5479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xtc7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mmh87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:20Z is after 
2025-08-24T17:21:41Z" Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.908458 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ece7e9c-d49a-4348-98ec-bd6ab589f750\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b85e2f2d2a870b205f19402a20540fa67104d12d2fcd412ada24c78b0602f2ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j55j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c16599a2b18976267f55176085b4b11e3e253e308707081d06d28d64f4dbb627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j55j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-knhz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:20Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.921415 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00ebb057ca6152197fa76fc78787533ab8ddaa1e1a096c624e3efc5fcf091332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616fae5157b8d51f903f870d19e7ed40447c3eb954b0e1bd0b3323c27deb59f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:20Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.933468 4813 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adac7b8b6297f077adc2d0e402547d19845a4b66a1279e143ba89f014ccdbf15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:20Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.945566 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rlpbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98439068-3c89-4c1b-8bb8-8aa848ef0cd3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73be3b0cabd20c94bd5c69211038398effe8adbb93eda17dbb136f17fa5ba62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdxm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rlpbx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:20Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.955676 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qltmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7637b907-3ae7-4b15-a4b9-a0c2217384a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://713975d4e8de4e14484cbd711f5279ddce3acad00571bf052b0ed728bd1a0ccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qvsb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qltmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:20Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.964324 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.964367 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.964375 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:20 crc 
kubenswrapper[4813]: I1125 10:32:20.964390 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.964400 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:20Z","lastTransitionTime":"2025-11-25T10:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.966941 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sbzfj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eccc6bcf-65c9-4741-a1d7-e5545661d3d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf35ea2947d355207c657bf7ef54d855cead727db293543efaa653bb03718f6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8s86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75f58510a2e937f933fadfec014e5ddff8e6cea4df17e8ade67f4c7af9be7104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env
-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8s86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sbzfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:20Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.986803 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8460ec76-ba89-4f8f-9055-d7274ab52d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0292e263e2315d5f0352fb15d9e84e89f103c0b8e3371db2a611b001c5a3fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ab3178c217051fe9026c77a963c194bed57ec0fb9521678f41c7c16235ca789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee35613ff013fdd9f9ba4aa81006a99cd328ab65010b9b337815829bfcc88937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1581fa41d3a426258f7c464d5e0f2ad431917ccec0616d26bb8b0affa320c90e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c4c4032f6080041e0b54686cb2c9981d2578e7a2bd02bcc1cf008c8fa3bfb6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7324d51c21107fadbd2f170e16f3cc20fc473ca9b7b1bbe0fc5e64378bd6ab7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e66dd83e85e97c04906e16d68e4fa2de6af1eeb
8595d8fd6fd8beae180f2b8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5a344c8be2b2a24dbe8591e0e33824d415e5551de94478447927c20469a72a5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T10:32:18Z\\\",\\\"message\\\":\\\"kg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1125 10:32:17.537417 6097 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1125 10:32:17.537690 6097 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1125 10:32:17.537889 6097 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 10:32:17.538045 6097 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 10:32:17.538128 6097 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 10:32:17.538254 6097 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 10:32:17.538347 6097 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 10:32:17.539783 6097 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1125 10:32:17.539844 6097 factory.go:656] Stopping watch factory\\\\nI1125 10:32:17.539865 6097 ovnkube.go:599] Stopped ovnkube\\\\nI1125 10:32:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e66dd83e85e97c04906e16d68e4fa2de6af1eeb8595d8fd6fd8beae180f2b8e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T10:32:20Z\\\",\\\"message\\\":\\\" default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:19Z is after 2025-08-24T17:21:41Z]\\\\nI1125 10:32:19.622644 6322 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/machine-api-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"58a148b3-0a7b-4412-b447-f87788c4883f\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32898e756d7697bcb5b6ae6780b7b752be67b44b9ce8c2f2459477c7f0b0a28d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6554bcb1ce7e97de39f99556fc4e3db63a583ea45bd87706a3c7737a8bde4f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6554bcb1ce7e97de39f99556fc4e3db63a583ea45bd87706a3c7737a8bde4f5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8s5k7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:20Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:20 crc kubenswrapper[4813]: I1125 10:32:20.997581 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-w28xl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4dw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4dw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-w28xl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:20Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:21 crc kubenswrapper[4813]: I1125 10:32:21.010671 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"061a2a52-878f-4543-8408-3a7b838f8881\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761ff3f6b4afa8edd4892d9fe727e977fb9700a8c7ab1c149c12bfa6431951c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf09669b247e0daa0787d296aa833570e1a542082a7a698bb499dc34f16fa4be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e593ff2a6412d8dfd3cd96e456f4fe9e2f8b04302d5b9036b828a3cf480b573\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11e2aa9eaa941ade1982256194422becbe3f375508cd507f603a822b10e03134\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:21Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:21 crc kubenswrapper[4813]: I1125 10:32:21.026785 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:21Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:21 crc kubenswrapper[4813]: I1125 10:32:21.040113 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:21Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:21 crc kubenswrapper[4813]: I1125 10:32:21.051650 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:21Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:21 crc kubenswrapper[4813]: I1125 10:32:21.067671 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:21 crc kubenswrapper[4813]: I1125 10:32:21.067739 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:21 crc kubenswrapper[4813]: I1125 10:32:21.067749 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:21 crc kubenswrapper[4813]: I1125 10:32:21.067766 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:21 crc kubenswrapper[4813]: I1125 10:32:21.067777 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:21Z","lastTransitionTime":"2025-11-25T10:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:21 crc kubenswrapper[4813]: I1125 10:32:21.067628 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4s9w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2ac9045-f02f-4149-afa5-61da1452d547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbdce0d7869276078c48cf3c335c37ec3c8f324e76db30e312485508977ed8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://792d5ec80cac3667bf3ad534b473ae86eca391f49782cfc0938d789eefd24a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://792d5ec80cac3667bf3ad534b473ae86eca391f49782cfc0938d789eefd24a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2afd11e5128cad91161f49b1e5d6ac378dbd319773996dbe702bf678a45a4a91\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2afd11e5128cad91161f49b1e5d6ac378dbd319773996dbe702bf678a45a4a91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00af788f1e52f5e8adb3f20e61f5fbcfd1090e97a1f24d4ebe926dad23155ae5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00af788f1e52f5e8adb3f20e61f5fbcfd1090e97a1f24d4ebe926dad23155ae5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://156bff53f3008351c3f76a0cc5e9c3eeb4f19a7201392d095bc62012791d9fa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://156bff53f3008351c3f76a0cc5e9c3eeb4f19a7201392d095bc62012791d9fa5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a98899b475454bf9249b6437439cb15a56278a71678cd2c7a430b4c14ef4022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a98899b475454bf9249b6437439cb15a56278a71678cd2c7a430b4c14ef4022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://345ac26e481961ce51e21644b04d31cd5a82c981e9a2355ddd863036cabb4a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://345ac26e481961ce51e21644b04d31cd5a82c981e9a2355ddd863036cabb4a4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4s9w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:21Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:21 crc kubenswrapper[4813]: I1125 10:32:21.169948 4813 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:21 crc kubenswrapper[4813]: I1125 10:32:21.169983 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:21 crc kubenswrapper[4813]: I1125 10:32:21.169991 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:21 crc kubenswrapper[4813]: I1125 10:32:21.170004 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:21 crc kubenswrapper[4813]: I1125 10:32:21.170014 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:21Z","lastTransitionTime":"2025-11-25T10:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:21 crc kubenswrapper[4813]: I1125 10:32:21.272385 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:21 crc kubenswrapper[4813]: I1125 10:32:21.272428 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:21 crc kubenswrapper[4813]: I1125 10:32:21.272439 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:21 crc kubenswrapper[4813]: I1125 10:32:21.272454 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:21 crc kubenswrapper[4813]: I1125 10:32:21.272465 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:21Z","lastTransitionTime":"2025-11-25T10:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:21 crc kubenswrapper[4813]: I1125 10:32:21.374444 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:21 crc kubenswrapper[4813]: I1125 10:32:21.374491 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:21 crc kubenswrapper[4813]: I1125 10:32:21.374505 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:21 crc kubenswrapper[4813]: I1125 10:32:21.374522 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:21 crc kubenswrapper[4813]: I1125 10:32:21.374535 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:21Z","lastTransitionTime":"2025-11-25T10:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:21 crc kubenswrapper[4813]: I1125 10:32:21.477220 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:21 crc kubenswrapper[4813]: I1125 10:32:21.477250 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:21 crc kubenswrapper[4813]: I1125 10:32:21.477258 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:21 crc kubenswrapper[4813]: I1125 10:32:21.477270 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:21 crc kubenswrapper[4813]: I1125 10:32:21.477279 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:21Z","lastTransitionTime":"2025-11-25T10:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:21 crc kubenswrapper[4813]: I1125 10:32:21.488900 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2-metrics-certs\") pod \"network-metrics-daemon-w28xl\" (UID: \"74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2\") " pod="openshift-multus/network-metrics-daemon-w28xl" Nov 25 10:32:21 crc kubenswrapper[4813]: E1125 10:32:21.489042 4813 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 10:32:21 crc kubenswrapper[4813]: E1125 10:32:21.489085 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2-metrics-certs podName:74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2 nodeName:}" failed. No retries permitted until 2025-11-25 10:32:23.489073232 +0000 UTC m=+40.618783118 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2-metrics-certs") pod "network-metrics-daemon-w28xl" (UID: "74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 10:32:21 crc kubenswrapper[4813]: I1125 10:32:21.579628 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:21 crc kubenswrapper[4813]: I1125 10:32:21.579656 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:21 crc kubenswrapper[4813]: I1125 10:32:21.579663 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:21 crc kubenswrapper[4813]: I1125 10:32:21.579695 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:21 crc kubenswrapper[4813]: I1125 10:32:21.579705 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:21Z","lastTransitionTime":"2025-11-25T10:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:21 crc kubenswrapper[4813]: I1125 10:32:21.620924 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 10:32:21 crc kubenswrapper[4813]: E1125 10:32:21.621256 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 10:32:21 crc kubenswrapper[4813]: I1125 10:32:21.621209 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w28xl" Nov 25 10:32:21 crc kubenswrapper[4813]: E1125 10:32:21.621492 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-w28xl" podUID="74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2" Nov 25 10:32:21 crc kubenswrapper[4813]: I1125 10:32:21.682601 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:21 crc kubenswrapper[4813]: I1125 10:32:21.682649 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:21 crc kubenswrapper[4813]: I1125 10:32:21.682659 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:21 crc kubenswrapper[4813]: I1125 10:32:21.682672 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:21 crc kubenswrapper[4813]: I1125 10:32:21.682703 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:21Z","lastTransitionTime":"2025-11-25T10:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:21 crc kubenswrapper[4813]: I1125 10:32:21.784768 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:21 crc kubenswrapper[4813]: I1125 10:32:21.784842 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:21 crc kubenswrapper[4813]: I1125 10:32:21.784904 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:21 crc kubenswrapper[4813]: I1125 10:32:21.784936 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:21 crc kubenswrapper[4813]: I1125 10:32:21.784958 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:21Z","lastTransitionTime":"2025-11-25T10:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:21 crc kubenswrapper[4813]: I1125 10:32:21.860804 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8s5k7_8460ec76-ba89-4f8f-9055-d7274ab52d11/ovnkube-controller/1.log" Nov 25 10:32:21 crc kubenswrapper[4813]: I1125 10:32:21.887504 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:21 crc kubenswrapper[4813]: I1125 10:32:21.887550 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:21 crc kubenswrapper[4813]: I1125 10:32:21.887560 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:21 crc kubenswrapper[4813]: I1125 10:32:21.887574 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:21 crc kubenswrapper[4813]: I1125 10:32:21.887585 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:21Z","lastTransitionTime":"2025-11-25T10:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:21 crc kubenswrapper[4813]: I1125 10:32:21.990108 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:21 crc kubenswrapper[4813]: I1125 10:32:21.990179 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:21 crc kubenswrapper[4813]: I1125 10:32:21.990194 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:21 crc kubenswrapper[4813]: I1125 10:32:21.990218 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:21 crc kubenswrapper[4813]: I1125 10:32:21.990234 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:21Z","lastTransitionTime":"2025-11-25T10:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:22 crc kubenswrapper[4813]: I1125 10:32:22.093789 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:22 crc kubenswrapper[4813]: I1125 10:32:22.093848 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:22 crc kubenswrapper[4813]: I1125 10:32:22.093860 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:22 crc kubenswrapper[4813]: I1125 10:32:22.093876 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:22 crc kubenswrapper[4813]: I1125 10:32:22.093891 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:22Z","lastTransitionTime":"2025-11-25T10:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:22 crc kubenswrapper[4813]: I1125 10:32:22.196920 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:22 crc kubenswrapper[4813]: I1125 10:32:22.197002 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:22 crc kubenswrapper[4813]: I1125 10:32:22.197014 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:22 crc kubenswrapper[4813]: I1125 10:32:22.197043 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:22 crc kubenswrapper[4813]: I1125 10:32:22.197062 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:22Z","lastTransitionTime":"2025-11-25T10:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:22 crc kubenswrapper[4813]: I1125 10:32:22.299374 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:22 crc kubenswrapper[4813]: I1125 10:32:22.299433 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:22 crc kubenswrapper[4813]: I1125 10:32:22.299445 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:22 crc kubenswrapper[4813]: I1125 10:32:22.299465 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:22 crc kubenswrapper[4813]: I1125 10:32:22.299478 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:22Z","lastTransitionTime":"2025-11-25T10:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:22 crc kubenswrapper[4813]: I1125 10:32:22.401574 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:22 crc kubenswrapper[4813]: I1125 10:32:22.401612 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:22 crc kubenswrapper[4813]: I1125 10:32:22.401622 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:22 crc kubenswrapper[4813]: I1125 10:32:22.401635 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:22 crc kubenswrapper[4813]: I1125 10:32:22.401648 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:22Z","lastTransitionTime":"2025-11-25T10:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:22 crc kubenswrapper[4813]: I1125 10:32:22.503610 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:22 crc kubenswrapper[4813]: I1125 10:32:22.503650 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:22 crc kubenswrapper[4813]: I1125 10:32:22.503659 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:22 crc kubenswrapper[4813]: I1125 10:32:22.503675 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:22 crc kubenswrapper[4813]: I1125 10:32:22.503703 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:22Z","lastTransitionTime":"2025-11-25T10:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:22 crc kubenswrapper[4813]: I1125 10:32:22.606738 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:22 crc kubenswrapper[4813]: I1125 10:32:22.606780 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:22 crc kubenswrapper[4813]: I1125 10:32:22.606788 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:22 crc kubenswrapper[4813]: I1125 10:32:22.606801 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:22 crc kubenswrapper[4813]: I1125 10:32:22.606811 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:22Z","lastTransitionTime":"2025-11-25T10:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:22 crc kubenswrapper[4813]: I1125 10:32:22.621189 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 10:32:22 crc kubenswrapper[4813]: I1125 10:32:22.621242 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 10:32:22 crc kubenswrapper[4813]: E1125 10:32:22.621307 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 10:32:22 crc kubenswrapper[4813]: E1125 10:32:22.621365 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 10:32:22 crc kubenswrapper[4813]: I1125 10:32:22.708895 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:22 crc kubenswrapper[4813]: I1125 10:32:22.708938 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:22 crc kubenswrapper[4813]: I1125 10:32:22.708948 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:22 crc kubenswrapper[4813]: I1125 10:32:22.708963 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:22 crc kubenswrapper[4813]: I1125 10:32:22.708974 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:22Z","lastTransitionTime":"2025-11-25T10:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:22 crc kubenswrapper[4813]: I1125 10:32:22.811820 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:22 crc kubenswrapper[4813]: I1125 10:32:22.811858 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:22 crc kubenswrapper[4813]: I1125 10:32:22.811866 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:22 crc kubenswrapper[4813]: I1125 10:32:22.811881 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:22 crc kubenswrapper[4813]: I1125 10:32:22.811891 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:22Z","lastTransitionTime":"2025-11-25T10:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:22 crc kubenswrapper[4813]: I1125 10:32:22.914380 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:22 crc kubenswrapper[4813]: I1125 10:32:22.914515 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:22 crc kubenswrapper[4813]: I1125 10:32:22.914525 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:22 crc kubenswrapper[4813]: I1125 10:32:22.914539 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:22 crc kubenswrapper[4813]: I1125 10:32:22.914549 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:22Z","lastTransitionTime":"2025-11-25T10:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:23 crc kubenswrapper[4813]: I1125 10:32:23.017727 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:23 crc kubenswrapper[4813]: I1125 10:32:23.017798 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:23 crc kubenswrapper[4813]: I1125 10:32:23.017819 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:23 crc kubenswrapper[4813]: I1125 10:32:23.017841 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:23 crc kubenswrapper[4813]: I1125 10:32:23.017855 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:23Z","lastTransitionTime":"2025-11-25T10:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:23 crc kubenswrapper[4813]: I1125 10:32:23.119614 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:23 crc kubenswrapper[4813]: I1125 10:32:23.119663 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:23 crc kubenswrapper[4813]: I1125 10:32:23.119698 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:23 crc kubenswrapper[4813]: I1125 10:32:23.119720 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:23 crc kubenswrapper[4813]: I1125 10:32:23.119735 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:23Z","lastTransitionTime":"2025-11-25T10:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:23 crc kubenswrapper[4813]: I1125 10:32:23.222251 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:23 crc kubenswrapper[4813]: I1125 10:32:23.222336 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:23 crc kubenswrapper[4813]: I1125 10:32:23.222345 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:23 crc kubenswrapper[4813]: I1125 10:32:23.222360 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:23 crc kubenswrapper[4813]: I1125 10:32:23.222370 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:23Z","lastTransitionTime":"2025-11-25T10:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:23 crc kubenswrapper[4813]: I1125 10:32:23.325057 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:23 crc kubenswrapper[4813]: I1125 10:32:23.325096 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:23 crc kubenswrapper[4813]: I1125 10:32:23.325109 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:23 crc kubenswrapper[4813]: I1125 10:32:23.325130 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:23 crc kubenswrapper[4813]: I1125 10:32:23.325146 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:23Z","lastTransitionTime":"2025-11-25T10:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:23 crc kubenswrapper[4813]: I1125 10:32:23.426832 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:23 crc kubenswrapper[4813]: I1125 10:32:23.426866 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:23 crc kubenswrapper[4813]: I1125 10:32:23.426874 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:23 crc kubenswrapper[4813]: I1125 10:32:23.426885 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:23 crc kubenswrapper[4813]: I1125 10:32:23.426894 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:23Z","lastTransitionTime":"2025-11-25T10:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:23 crc kubenswrapper[4813]: I1125 10:32:23.511816 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2-metrics-certs\") pod \"network-metrics-daemon-w28xl\" (UID: \"74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2\") " pod="openshift-multus/network-metrics-daemon-w28xl" Nov 25 10:32:23 crc kubenswrapper[4813]: E1125 10:32:23.511957 4813 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 10:32:23 crc kubenswrapper[4813]: E1125 10:32:23.512017 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2-metrics-certs podName:74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2 nodeName:}" failed. No retries permitted until 2025-11-25 10:32:27.511998567 +0000 UTC m=+44.641708453 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2-metrics-certs") pod "network-metrics-daemon-w28xl" (UID: "74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 10:32:23 crc kubenswrapper[4813]: I1125 10:32:23.529577 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:23 crc kubenswrapper[4813]: I1125 10:32:23.529611 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:23 crc kubenswrapper[4813]: I1125 10:32:23.529621 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:23 crc kubenswrapper[4813]: I1125 10:32:23.529634 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:23 crc kubenswrapper[4813]: I1125 10:32:23.529643 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:23Z","lastTransitionTime":"2025-11-25T10:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:23 crc kubenswrapper[4813]: I1125 10:32:23.620428 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 10:32:23 crc kubenswrapper[4813]: I1125 10:32:23.620455 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w28xl" Nov 25 10:32:23 crc kubenswrapper[4813]: E1125 10:32:23.620542 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 10:32:23 crc kubenswrapper[4813]: E1125 10:32:23.620616 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-w28xl" podUID="74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2" Nov 25 10:32:23 crc kubenswrapper[4813]: I1125 10:32:23.631510 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:23 crc kubenswrapper[4813]: I1125 10:32:23.631554 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:23 crc kubenswrapper[4813]: I1125 10:32:23.631565 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:23 crc kubenswrapper[4813]: I1125 10:32:23.631579 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:23 crc kubenswrapper[4813]: I1125 10:32:23.631592 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:23Z","lastTransitionTime":"2025-11-25T10:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:23 crc kubenswrapper[4813]: I1125 10:32:23.633755 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03303956e8d88df49c9c142a7074fa39272a78ea67e868b302d3a663d7f7178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2025-11-25T10:32:23Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:23 crc kubenswrapper[4813]: I1125 10:32:23.642614 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mmh87" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bcb41f8-67f5-4a87-8b49-07da054e0c81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fbf69eb2f0afb160e40675e9a17e8a9798a3f02de6a2f3aae7a30ef989e5479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xtc7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mmh87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:23Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:23 crc kubenswrapper[4813]: I1125 10:32:23.653308 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ece7e9c-d49a-4348-98ec-bd6ab589f750\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b85e2f2d2a870b205f19402a20540fa67104d12d2fcd412ada24c78b0602f2ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j55j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c16599a2b18976267f55176085b4b11e3e253e308707081d06d28d64f4dbb627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j55j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-knhz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:23Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:23 crc kubenswrapper[4813]: I1125 10:32:23.669629 4813 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86379c39-b839-4552-949c-35431188a3a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf4d6feac8fd516ce2d5e2ec13519c2bbd2d152cffe7c434fe2c4b478e8c9a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80f2017cddd8c12997b1818074df5aa37a902dca43c4b60dda58080e1887f8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225dc69c294a0063eda858d71902e848fb59d4595c25bfeecdf8dfb60fdcd6f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cbb3888ff07d07784e188a0b7b49e0f5b421cfaeb6
1924a0a46094fb3795b32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e393f04b541e0fc8c686b42396605529aa65fdaaf6602dd7c64a322a5071d643\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T10:31:57Z\\\",\\\"message\\\":\\\"W1125 10:31:46.900040 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1125 10:31:46.900557 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764066706 cert, and key in /tmp/serving-cert-1749499007/serving-signer.crt, /tmp/serving-cert-1749499007/serving-signer.key\\\\nI1125 10:31:47.317086 1 observer_polling.go:159] Starting file observer\\\\nW1125 10:31:47.321027 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 10:31:47.321219 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 10:31:47.325062 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1749499007/tls.crt::/tmp/serving-cert-1749499007/tls.key\\\\\\\"\\\\nF1125 10:31:57.761534 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46e1b456988c700012c86fac792b65d2e7c9a049057d5a17efbf600418191910\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3af1cb4a9a556e116874a9b7f7cb75a99db83b991b65280b6b2a96233ca0a85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3af1cb4a9a556e116874a9b7f7cb75a99db83b991b65280b6b2a96233ca0a85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:31:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:23Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:23 crc kubenswrapper[4813]: I1125 10:32:23.680827 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00ebb057ca6152197fa76fc78787533ab8ddaa1e1a096c624e3efc5fcf091332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616fae5157b8d51f903f870d19e7ed40447c3eb954b0e1bd0b3323c27deb59f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:23Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:23 crc kubenswrapper[4813]: I1125 10:32:23.690573 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adac7b8b6297f077adc2d0e402547d19845a4b66a1279e143ba89f014ccdbf15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:23Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:23 crc kubenswrapper[4813]: I1125 10:32:23.701723 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rlpbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98439068-3c89-4c1b-8bb8-8aa848ef0cd3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73be3b0cabd20c94bd5c69211038398effe8adbb93eda17dbb136f17fa5ba62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdxm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rlpbx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:23Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:23 crc kubenswrapper[4813]: I1125 10:32:23.712830 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qltmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7637b907-3ae7-4b15-a4b9-a0c2217384a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://713975d4e8de4e14484cbd711f5279ddce3acad00571bf052b0ed728bd1a0ccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qvsb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qltmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:23Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:23 crc kubenswrapper[4813]: I1125 10:32:23.723406 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sbzfj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eccc6bcf-65c9-4741-a1d7-e5545661d3d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf35ea2947d355207c657bf7ef54d855cead727db293543efaa653bb03718f6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8s86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75f58510a2e937f933fadfec014e5ddff8e6cea4df17e8ade67f4c7af9be7104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8s86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sbzfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:23Z is after 2025-08-24T17:21:41Z" Nov 25 
10:32:23 crc kubenswrapper[4813]: I1125 10:32:23.733825 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:23 crc kubenswrapper[4813]: I1125 10:32:23.733897 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:23 crc kubenswrapper[4813]: I1125 10:32:23.733910 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:23 crc kubenswrapper[4813]: I1125 10:32:23.733951 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:23 crc kubenswrapper[4813]: I1125 10:32:23.733964 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:23Z","lastTransitionTime":"2025-11-25T10:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:23 crc kubenswrapper[4813]: I1125 10:32:23.734152 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:23Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:23 crc kubenswrapper[4813]: I1125 10:32:23.747308 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:23Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:23 crc kubenswrapper[4813]: I1125 10:32:23.758908 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:23Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:23 crc kubenswrapper[4813]: I1125 10:32:23.774095 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4s9w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2ac9045-f02f-4149-afa5-61da1452d547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbdce0d7869276078c48cf3c335c37ec3c8f324e76db30e312485508977ed8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://792d5ec80cac3667bf3ad534b473ae86eca391f49782cfc0938d789eefd24a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://792d5ec80cac3667bf3ad534b473ae86eca391f49782cfc0938d789eefd24a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2afd11e5128cad91161f49b1e5d6ac378dbd319773996dbe702bf678a45a4a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2afd11e5128cad91161f49b1e5d6ac378dbd319773996dbe702bf678a45a4a91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00af788f1e52f5e8adb3f20e61f5fbcfd1090e97a1f24d4ebe926dad23155ae5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00af788f1e52f5e8adb3f20e61f5fbcfd1090e97a1f24d4ebe926dad23155ae5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://156bff53f3008351c3f76a0cc5e9c3eeb4f19a7201392d095bc62012791d9fa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://156bff53f3008351c3f76a0cc5e9c3eeb4f19a7201392d095bc62012791d9fa5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a98899b475454bf9249b6437439cb15a56278a71678cd2c7a430b4c14ef4022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a98899b475454bf9249b6437439cb15a56278a71678cd2c7a430b4c14ef4022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://345ac26e481961ce51e21644b04d31cd5a82c981e9a2355ddd863036cabb4a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://345ac26e481961ce51e21644b04d31cd5a82c981e9a2355ddd863036cabb4a4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4s9w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:23Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:23 crc kubenswrapper[4813]: I1125 10:32:23.795792 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8460ec76-ba89-4f8f-9055-d7274ab52d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0292e263e2315d5f0352fb15d9e84e89f103c0b8e3371db2a611b001c5a3fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ab3178c217051fe9026c77a963c194bed57ec0fb9521678f41c7c16235ca789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee35613ff013fdd9f9ba4aa81006a99cd328ab65010b9b337815829bfcc88937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1581fa41d3a426258f7c464d5e0f2ad431917ccec0616d26bb8b0affa320c90e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c4c4032f6080041e0b54686cb2c9981d2578e7a2bd02bcc1cf008c8fa3bfb6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7324d51c21107fadbd2f170e16f3cc20fc473ca9b7b1bbe0fc5e64378bd6ab7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e66dd83e85e97c04906e16d68e4fa2de6af1eeb
8595d8fd6fd8beae180f2b8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5a344c8be2b2a24dbe8591e0e33824d415e5551de94478447927c20469a72a5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T10:32:18Z\\\",\\\"message\\\":\\\"kg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1125 10:32:17.537417 6097 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1125 10:32:17.537690 6097 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1125 10:32:17.537889 6097 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 10:32:17.538045 6097 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 10:32:17.538128 6097 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 10:32:17.538254 6097 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 10:32:17.538347 6097 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 10:32:17.539783 6097 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1125 10:32:17.539844 6097 factory.go:656] Stopping watch factory\\\\nI1125 10:32:17.539865 6097 ovnkube.go:599] Stopped ovnkube\\\\nI1125 10:32:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e66dd83e85e97c04906e16d68e4fa2de6af1eeb8595d8fd6fd8beae180f2b8e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T10:32:20Z\\\",\\\"message\\\":\\\" default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:19Z is after 2025-08-24T17:21:41Z]\\\\nI1125 10:32:19.622644 6322 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/machine-api-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"58a148b3-0a7b-4412-b447-f87788c4883f\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32898e756d7697bcb5b6ae6780b7b752be67b44b9ce8c2f2459477c7f0b0a28d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6554bcb1ce7e97de39f99556fc4e3db63a583ea45bd87706a3c7737a8bde4f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6554bcb1ce7e97de39f99556fc4e3db63a583ea45bd87706a3c7737a8bde4f5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8s5k7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:23Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:23 crc kubenswrapper[4813]: I1125 10:32:23.807254 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-w28xl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4dw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4dw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-w28xl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:23Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:23 crc kubenswrapper[4813]: I1125 10:32:23.819550 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"061a2a52-878f-4543-8408-3a7b838f8881\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761ff3f6b4afa8edd4892d9fe727e977fb9700a8c7ab1c149c12bfa6431951c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf09669b247e0daa0787d296aa833570e1a542082a7a698bb499dc34f16fa4be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e593ff2a6412d8dfd3cd96e456f4fe9e2f8b04302d5b9036b828a3cf480b573\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11e2aa9eaa941ade1982256194422becbe3f375508cd507f603a822b10e03134\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:23Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:23 crc kubenswrapper[4813]: I1125 10:32:23.836395 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:23 crc kubenswrapper[4813]: I1125 10:32:23.836441 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:23 crc kubenswrapper[4813]: I1125 10:32:23.836450 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:23 crc kubenswrapper[4813]: I1125 10:32:23.836466 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:23 crc kubenswrapper[4813]: I1125 10:32:23.836476 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:23Z","lastTransitionTime":"2025-11-25T10:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:23 crc kubenswrapper[4813]: I1125 10:32:23.938540 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:23 crc kubenswrapper[4813]: I1125 10:32:23.938609 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:23 crc kubenswrapper[4813]: I1125 10:32:23.938621 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:23 crc kubenswrapper[4813]: I1125 10:32:23.938644 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:23 crc kubenswrapper[4813]: I1125 10:32:23.938657 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:23Z","lastTransitionTime":"2025-11-25T10:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:24 crc kubenswrapper[4813]: I1125 10:32:24.041825 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:24 crc kubenswrapper[4813]: I1125 10:32:24.041864 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:24 crc kubenswrapper[4813]: I1125 10:32:24.041918 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:24 crc kubenswrapper[4813]: I1125 10:32:24.041933 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:24 crc kubenswrapper[4813]: I1125 10:32:24.041944 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:24Z","lastTransitionTime":"2025-11-25T10:32:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:24 crc kubenswrapper[4813]: I1125 10:32:24.146964 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:24 crc kubenswrapper[4813]: I1125 10:32:24.147032 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:24 crc kubenswrapper[4813]: I1125 10:32:24.147049 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:24 crc kubenswrapper[4813]: I1125 10:32:24.147079 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:24 crc kubenswrapper[4813]: I1125 10:32:24.147102 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:24Z","lastTransitionTime":"2025-11-25T10:32:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:24 crc kubenswrapper[4813]: I1125 10:32:24.249373 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:24 crc kubenswrapper[4813]: I1125 10:32:24.249425 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:24 crc kubenswrapper[4813]: I1125 10:32:24.249434 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:24 crc kubenswrapper[4813]: I1125 10:32:24.249454 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:24 crc kubenswrapper[4813]: I1125 10:32:24.249475 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:24Z","lastTransitionTime":"2025-11-25T10:32:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:24 crc kubenswrapper[4813]: I1125 10:32:24.353351 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:24 crc kubenswrapper[4813]: I1125 10:32:24.353399 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:24 crc kubenswrapper[4813]: I1125 10:32:24.353408 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:24 crc kubenswrapper[4813]: I1125 10:32:24.353427 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:24 crc kubenswrapper[4813]: I1125 10:32:24.353436 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:24Z","lastTransitionTime":"2025-11-25T10:32:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:24 crc kubenswrapper[4813]: I1125 10:32:24.456411 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:24 crc kubenswrapper[4813]: I1125 10:32:24.456451 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:24 crc kubenswrapper[4813]: I1125 10:32:24.456462 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:24 crc kubenswrapper[4813]: I1125 10:32:24.456477 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:24 crc kubenswrapper[4813]: I1125 10:32:24.456486 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:24Z","lastTransitionTime":"2025-11-25T10:32:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:24 crc kubenswrapper[4813]: I1125 10:32:24.460881 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" Nov 25 10:32:24 crc kubenswrapper[4813]: I1125 10:32:24.461844 4813 scope.go:117] "RemoveContainer" containerID="0e66dd83e85e97c04906e16d68e4fa2de6af1eeb8595d8fd6fd8beae180f2b8e" Nov 25 10:32:24 crc kubenswrapper[4813]: E1125 10:32:24.462142 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-8s5k7_openshift-ovn-kubernetes(8460ec76-ba89-4f8f-9055-d7274ab52d11)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" podUID="8460ec76-ba89-4f8f-9055-d7274ab52d11" Nov 25 10:32:24 crc kubenswrapper[4813]: I1125 10:32:24.474996 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00ebb057ca6152197fa76fc78787533ab8ddaa1e1a096c624e3efc5fcf091332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616fae5157b8d51f903f870d19e7ed40447c3eb954b0e1bd0b3323c27deb59f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\
\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:24Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:24 crc kubenswrapper[4813]: I1125 10:32:24.486770 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adac7b8b6297f077adc2d0e402547d19845a4b66a1279e143ba89f014ccdbf15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:24Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:24 crc kubenswrapper[4813]: I1125 10:32:24.506922 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rlpbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98439068-3c89-4c1b-8bb8-8aa848ef0cd3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73be3b0cabd20c94bd5c69211038398effe8adbb93eda17dbb136f17fa5ba62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdxm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rlpbx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:24Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:24 crc kubenswrapper[4813]: I1125 10:32:24.520099 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qltmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7637b907-3ae7-4b15-a4b9-a0c2217384a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://713975d4e8de4e14484cbd711f5279ddce3acad00571bf052b0ed728bd1a0ccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qvsb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qltmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:24Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:24 crc kubenswrapper[4813]: I1125 10:32:24.534039 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sbzfj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eccc6bcf-65c9-4741-a1d7-e5545661d3d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf35ea2947d355207c657bf7ef54d855cead727db293543efaa653bb03718f6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8s86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75f58510a2e937f933fadfec014e5ddff8e6cea4df17e8ade67f4c7af9be7104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8s86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sbzfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:24Z is after 2025-08-24T17:21:41Z" Nov 25 
10:32:24 crc kubenswrapper[4813]: I1125 10:32:24.548253 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-w28xl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4dw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4dw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-w28xl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:24Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:24 crc kubenswrapper[4813]: I1125 10:32:24.559213 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:24 crc kubenswrapper[4813]: I1125 10:32:24.559262 4813 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:24 crc kubenswrapper[4813]: I1125 10:32:24.559271 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:24 crc kubenswrapper[4813]: I1125 10:32:24.559287 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:24 crc kubenswrapper[4813]: I1125 10:32:24.559296 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:24Z","lastTransitionTime":"2025-11-25T10:32:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:24 crc kubenswrapper[4813]: I1125 10:32:24.562430 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"061a2a52-878f-4543-8408-3a7b838f8881\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761ff3f6b4afa8edd4892d9fe727e977fb9700a8c7ab1c149c12bfa6431951c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf09669b247e0daa0787d296aa833570e1a542082a7a698bb499dc34f16fa4be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e593ff2a6412d8dfd3cd96e456f4fe9e2f8b04302d5b9036b828a3cf480b573\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11e2aa9eaa941ade1982256194422becbe3f375508cd507f603a822b10e03134\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:24Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:24 crc kubenswrapper[4813]: I1125 10:32:24.576868 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:24Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:24 crc kubenswrapper[4813]: I1125 10:32:24.587539 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:24Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:24 crc kubenswrapper[4813]: I1125 10:32:24.597108 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:24Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:24 crc kubenswrapper[4813]: I1125 10:32:24.612580 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4s9w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2ac9045-f02f-4149-afa5-61da1452d547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbdce0d7869276078c48cf3c335c37ec3c8f324e76db30e312485508977ed8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://792d5ec80cac3667bf3ad534b473ae86eca391f49782cfc0938d789eefd24a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://792d5ec80cac3667bf3ad534b473ae86eca391f49782cfc0938d789eefd24a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2afd11e5128cad91161f49b1e5d6ac378dbd319773996dbe702bf678a45a4a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2afd11e5128cad91161f49b1e5d6ac378dbd319773996dbe702bf678a45a4a91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00af788f1e52f5e8adb3f20e61f5fbcfd1090e97a1f24d4ebe926dad23155ae5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00af788f1e52f5e8adb3f20e61f5fbcfd1090e97a1f24d4ebe926dad23155ae5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://156bff53f3008351c3f76a0cc5e9c3eeb4f19a7201392d095bc62012791d9fa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://156bff53f3008351c3f76a0cc5e9c3eeb4f19a7201392d095bc62012791d9fa5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a98899b475454bf9249b6437439cb15a56278a71678cd2c7a430b4c14ef4022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a98899b475454bf9249b6437439cb15a56278a71678cd2c7a430b4c14ef4022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://345ac26e481961ce51e21644b04d31cd5a82c981e9a2355ddd863036cabb4a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://345ac26e481961ce51e21644b04d31cd5a82c981e9a2355ddd863036cabb4a4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4s9w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:24Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:24 crc kubenswrapper[4813]: I1125 10:32:24.621249 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 10:32:24 crc kubenswrapper[4813]: I1125 10:32:24.621248 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 10:32:24 crc kubenswrapper[4813]: E1125 10:32:24.621353 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 10:32:24 crc kubenswrapper[4813]: E1125 10:32:24.621406 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 10:32:24 crc kubenswrapper[4813]: I1125 10:32:24.628744 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8460ec76-ba89-4f8f-9055-d7274ab52d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0292e263e2315d5f0352fb15d9e84e89f103c0b8e3371db2a611b001c5a3fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ab3178c217051fe9026c77a963c194bed57ec0fb9521678f41c7c16235ca789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\
":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee35613ff013fdd9f9ba4aa81006a99cd328ab65010b9b337815829bfcc88937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1581fa41d3a426258f7c464d5e0f2ad431917ccec0616d26bb8b0affa320c90e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c4c4032f6080041e0b54686cb2c9981d2578e7a2bd02bcc1cf008c8fa3bfb6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7324d51c21107fadbd2f170e16f3cc20fc473ca9b7b1bbe0fc5e64378bd6ab7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209948
2919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e66dd83e85e97c04906e16d68e4fa2de6af1eeb8595d8fd6fd8beae180f2b8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e66dd83e85e97c04906e16d68e4fa2de6af1eeb8595d8fd6fd8beae180f2b8e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T10:32:20Z\\\",\\\"message\\\":\\\" default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:19Z is after 2025-08-24T17:21:41Z]\\\\nI1125 10:32:19.622644 6322 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/machine-api-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"58a148b3-0a7b-4412-b447-f87788c4883f\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, 
Groups:[]\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-8s5k7_openshift-ovn-kubernetes(8460ec76-ba89-4f8f-9055-d7274ab52d11)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32898e756d7697bcb5b6ae6780b7b752be67b44b9ce8c2f2459477c7f0b0a28d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"rec
ursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6554bcb1ce7e97de39f99556fc4e3db63a583ea45bd87706a3c7737a8bde4f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6554bcb1ce7e97de39f99556fc4e3db63a583ea45bd87706a3c7737a8bde4f5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8s5k7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:24Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:24 crc kubenswrapper[4813]: I1125 10:32:24.640893 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03303956e8d88df49c9c142a7074fa39272a78ea67e868b302d3a663d7f7178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:24Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:24 crc kubenswrapper[4813]: I1125 10:32:24.652961 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86379c39-b839-4552-949c-35431188a3a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf4d6feac8fd516ce2d5e2ec13519c2bbd2d152cffe7c434fe2c4b478e8c9a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80f2017cddd8c12997b1818074df5aa37a902dca43c4b60dda58080e1887f8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225dc69c294a0063eda858d71902e848fb59d4595c25bfeecdf8dfb60fdcd6f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cbb3888ff07d07784e188a0b7b49e0f5b421cfaeb61924a0a46094fb3795b32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e393f04b541e0fc8c686b42396605529aa65fdaaf6602dd7c64a322a5071d643\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T10:31:57Z\\\",\\\"message\\\":\\\"W1125 10:31:46.900040 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1125 10:31:46.900557 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764066706 cert, and key in /tmp/serving-cert-1749499007/serving-signer.crt, /tmp/serving-cert-1749499007/serving-signer.key\\\\nI1125 10:31:47.317086 1 observer_polling.go:159] Starting file observer\\\\nW1125 10:31:47.321027 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 10:31:47.321219 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 10:31:47.325062 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1749499007/tls.crt::/tmp/serving-cert-1749499007/tls.key\\\\\\\"\\\\nF1125 10:31:57.761534 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46e1b456988c700012c86fac792b65d2e7c9a049057d5a17efbf600418191910\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3af1cb4a9a556e116874a9b7f7cb75a99db83b991b65280b6b2a96233ca0a85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://f3af1cb4a9a556e116874a9b7f7cb75a99db83b991b65280b6b2a96233ca0a85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:31:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:24Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:24 crc kubenswrapper[4813]: I1125 10:32:24.661836 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:24 crc kubenswrapper[4813]: I1125 10:32:24.661895 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:24 crc kubenswrapper[4813]: I1125 10:32:24.661914 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:24 crc kubenswrapper[4813]: I1125 10:32:24.661938 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:24 crc kubenswrapper[4813]: I1125 10:32:24.661954 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:24Z","lastTransitionTime":"2025-11-25T10:32:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:24 crc kubenswrapper[4813]: I1125 10:32:24.661857 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mmh87" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bcb41f8-67f5-4a87-8b49-07da054e0c81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fbf69eb2f0afb160e40675e9a17e8a9798a3f02de6a2f3aae7a30ef989e5479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xtc7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mmh87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:24Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:24 crc kubenswrapper[4813]: I1125 10:32:24.671238 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ece7e9c-d49a-4348-98ec-bd6ab589f750\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b85e2f2d2a870b205f19402a20540fa67104d12d2fcd412ada24c78b0602f2ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j55j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c16599a2b18976267f55176085b4b11e3e253e308707081d06d28d64f4dbb627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j55j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-knhz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:24Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:24 crc kubenswrapper[4813]: I1125 10:32:24.764338 4813 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:24 crc kubenswrapper[4813]: I1125 10:32:24.764376 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:24 crc kubenswrapper[4813]: I1125 10:32:24.764384 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:24 crc kubenswrapper[4813]: I1125 10:32:24.764396 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:24 crc kubenswrapper[4813]: I1125 10:32:24.764405 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:24Z","lastTransitionTime":"2025-11-25T10:32:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:24 crc kubenswrapper[4813]: I1125 10:32:24.866971 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:24 crc kubenswrapper[4813]: I1125 10:32:24.867042 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:24 crc kubenswrapper[4813]: I1125 10:32:24.867059 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:24 crc kubenswrapper[4813]: I1125 10:32:24.867082 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:24 crc kubenswrapper[4813]: I1125 10:32:24.867099 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:24Z","lastTransitionTime":"2025-11-25T10:32:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:24 crc kubenswrapper[4813]: I1125 10:32:24.969552 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:24 crc kubenswrapper[4813]: I1125 10:32:24.969595 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:24 crc kubenswrapper[4813]: I1125 10:32:24.969605 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:24 crc kubenswrapper[4813]: I1125 10:32:24.969621 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:24 crc kubenswrapper[4813]: I1125 10:32:24.969633 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:24Z","lastTransitionTime":"2025-11-25T10:32:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:25 crc kubenswrapper[4813]: I1125 10:32:25.071778 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:25 crc kubenswrapper[4813]: I1125 10:32:25.071846 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:25 crc kubenswrapper[4813]: I1125 10:32:25.071868 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:25 crc kubenswrapper[4813]: I1125 10:32:25.071899 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:25 crc kubenswrapper[4813]: I1125 10:32:25.071920 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:25Z","lastTransitionTime":"2025-11-25T10:32:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:25 crc kubenswrapper[4813]: I1125 10:32:25.174405 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:25 crc kubenswrapper[4813]: I1125 10:32:25.174485 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:25 crc kubenswrapper[4813]: I1125 10:32:25.174508 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:25 crc kubenswrapper[4813]: I1125 10:32:25.174543 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:25 crc kubenswrapper[4813]: I1125 10:32:25.174564 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:25Z","lastTransitionTime":"2025-11-25T10:32:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:25 crc kubenswrapper[4813]: I1125 10:32:25.276912 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:25 crc kubenswrapper[4813]: I1125 10:32:25.276946 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:25 crc kubenswrapper[4813]: I1125 10:32:25.276955 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:25 crc kubenswrapper[4813]: I1125 10:32:25.276971 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:25 crc kubenswrapper[4813]: I1125 10:32:25.276981 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:25Z","lastTransitionTime":"2025-11-25T10:32:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:25 crc kubenswrapper[4813]: I1125 10:32:25.379428 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:25 crc kubenswrapper[4813]: I1125 10:32:25.379488 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:25 crc kubenswrapper[4813]: I1125 10:32:25.379504 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:25 crc kubenswrapper[4813]: I1125 10:32:25.379523 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:25 crc kubenswrapper[4813]: I1125 10:32:25.379534 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:25Z","lastTransitionTime":"2025-11-25T10:32:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:25 crc kubenswrapper[4813]: I1125 10:32:25.482975 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:25 crc kubenswrapper[4813]: I1125 10:32:25.483033 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:25 crc kubenswrapper[4813]: I1125 10:32:25.483051 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:25 crc kubenswrapper[4813]: I1125 10:32:25.483078 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:25 crc kubenswrapper[4813]: I1125 10:32:25.483095 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:25Z","lastTransitionTime":"2025-11-25T10:32:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:25 crc kubenswrapper[4813]: I1125 10:32:25.585105 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:25 crc kubenswrapper[4813]: I1125 10:32:25.585174 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:25 crc kubenswrapper[4813]: I1125 10:32:25.585187 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:25 crc kubenswrapper[4813]: I1125 10:32:25.585204 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:25 crc kubenswrapper[4813]: I1125 10:32:25.585215 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:25Z","lastTransitionTime":"2025-11-25T10:32:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:25 crc kubenswrapper[4813]: I1125 10:32:25.621420 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 10:32:25 crc kubenswrapper[4813]: I1125 10:32:25.621454 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w28xl" Nov 25 10:32:25 crc kubenswrapper[4813]: E1125 10:32:25.621598 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 10:32:25 crc kubenswrapper[4813]: E1125 10:32:25.621716 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w28xl" podUID="74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2" Nov 25 10:32:25 crc kubenswrapper[4813]: I1125 10:32:25.687553 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:25 crc kubenswrapper[4813]: I1125 10:32:25.687599 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:25 crc kubenswrapper[4813]: I1125 10:32:25.687610 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:25 crc kubenswrapper[4813]: I1125 10:32:25.687630 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:25 crc kubenswrapper[4813]: I1125 10:32:25.687642 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:25Z","lastTransitionTime":"2025-11-25T10:32:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:25 crc kubenswrapper[4813]: I1125 10:32:25.790969 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:25 crc kubenswrapper[4813]: I1125 10:32:25.791017 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:25 crc kubenswrapper[4813]: I1125 10:32:25.791030 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:25 crc kubenswrapper[4813]: I1125 10:32:25.791062 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:25 crc kubenswrapper[4813]: I1125 10:32:25.791088 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:25Z","lastTransitionTime":"2025-11-25T10:32:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:25 crc kubenswrapper[4813]: I1125 10:32:25.893173 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:25 crc kubenswrapper[4813]: I1125 10:32:25.893221 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:25 crc kubenswrapper[4813]: I1125 10:32:25.893233 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:25 crc kubenswrapper[4813]: I1125 10:32:25.893247 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:25 crc kubenswrapper[4813]: I1125 10:32:25.893257 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:25Z","lastTransitionTime":"2025-11-25T10:32:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:25 crc kubenswrapper[4813]: I1125 10:32:25.995983 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:25 crc kubenswrapper[4813]: I1125 10:32:25.996035 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:25 crc kubenswrapper[4813]: I1125 10:32:25.996047 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:25 crc kubenswrapper[4813]: I1125 10:32:25.996067 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:25 crc kubenswrapper[4813]: I1125 10:32:25.996081 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:25Z","lastTransitionTime":"2025-11-25T10:32:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:26 crc kubenswrapper[4813]: I1125 10:32:26.098798 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:26 crc kubenswrapper[4813]: I1125 10:32:26.098842 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:26 crc kubenswrapper[4813]: I1125 10:32:26.098859 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:26 crc kubenswrapper[4813]: I1125 10:32:26.098878 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:26 crc kubenswrapper[4813]: I1125 10:32:26.098889 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:26Z","lastTransitionTime":"2025-11-25T10:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:26 crc kubenswrapper[4813]: I1125 10:32:26.201345 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:26 crc kubenswrapper[4813]: I1125 10:32:26.201410 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:26 crc kubenswrapper[4813]: I1125 10:32:26.201427 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:26 crc kubenswrapper[4813]: I1125 10:32:26.201450 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:26 crc kubenswrapper[4813]: I1125 10:32:26.201470 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:26Z","lastTransitionTime":"2025-11-25T10:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:26 crc kubenswrapper[4813]: I1125 10:32:26.304212 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:26 crc kubenswrapper[4813]: I1125 10:32:26.304268 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:26 crc kubenswrapper[4813]: I1125 10:32:26.304278 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:26 crc kubenswrapper[4813]: I1125 10:32:26.304292 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:26 crc kubenswrapper[4813]: I1125 10:32:26.304301 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:26Z","lastTransitionTime":"2025-11-25T10:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:26 crc kubenswrapper[4813]: I1125 10:32:26.407066 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:26 crc kubenswrapper[4813]: I1125 10:32:26.407111 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:26 crc kubenswrapper[4813]: I1125 10:32:26.407122 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:26 crc kubenswrapper[4813]: I1125 10:32:26.407141 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:26 crc kubenswrapper[4813]: I1125 10:32:26.407153 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:26Z","lastTransitionTime":"2025-11-25T10:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:26 crc kubenswrapper[4813]: I1125 10:32:26.509961 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:26 crc kubenswrapper[4813]: I1125 10:32:26.510006 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:26 crc kubenswrapper[4813]: I1125 10:32:26.510020 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:26 crc kubenswrapper[4813]: I1125 10:32:26.510037 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:26 crc kubenswrapper[4813]: I1125 10:32:26.510052 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:26Z","lastTransitionTime":"2025-11-25T10:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:26 crc kubenswrapper[4813]: I1125 10:32:26.612638 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:26 crc kubenswrapper[4813]: I1125 10:32:26.612797 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:26 crc kubenswrapper[4813]: I1125 10:32:26.612833 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:26 crc kubenswrapper[4813]: I1125 10:32:26.612863 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:26 crc kubenswrapper[4813]: I1125 10:32:26.612887 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:26Z","lastTransitionTime":"2025-11-25T10:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:26 crc kubenswrapper[4813]: I1125 10:32:26.621002 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 10:32:26 crc kubenswrapper[4813]: I1125 10:32:26.621042 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 10:32:26 crc kubenswrapper[4813]: E1125 10:32:26.621188 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 10:32:26 crc kubenswrapper[4813]: E1125 10:32:26.621334 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 10:32:26 crc kubenswrapper[4813]: I1125 10:32:26.719778 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:26 crc kubenswrapper[4813]: I1125 10:32:26.719866 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:26 crc kubenswrapper[4813]: I1125 10:32:26.719886 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:26 crc kubenswrapper[4813]: I1125 10:32:26.719912 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:26 crc kubenswrapper[4813]: I1125 10:32:26.719956 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:26Z","lastTransitionTime":"2025-11-25T10:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:26 crc kubenswrapper[4813]: I1125 10:32:26.823021 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:26 crc kubenswrapper[4813]: I1125 10:32:26.823068 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:26 crc kubenswrapper[4813]: I1125 10:32:26.823080 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:26 crc kubenswrapper[4813]: I1125 10:32:26.823096 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:26 crc kubenswrapper[4813]: I1125 10:32:26.823106 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:26Z","lastTransitionTime":"2025-11-25T10:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:26 crc kubenswrapper[4813]: I1125 10:32:26.926061 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:26 crc kubenswrapper[4813]: I1125 10:32:26.926122 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:26 crc kubenswrapper[4813]: I1125 10:32:26.926138 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:26 crc kubenswrapper[4813]: I1125 10:32:26.926163 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:26 crc kubenswrapper[4813]: I1125 10:32:26.926180 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:26Z","lastTransitionTime":"2025-11-25T10:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:27 crc kubenswrapper[4813]: I1125 10:32:27.028654 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:27 crc kubenswrapper[4813]: I1125 10:32:27.028718 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:27 crc kubenswrapper[4813]: I1125 10:32:27.028731 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:27 crc kubenswrapper[4813]: I1125 10:32:27.028747 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:27 crc kubenswrapper[4813]: I1125 10:32:27.028759 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:27Z","lastTransitionTime":"2025-11-25T10:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:27 crc kubenswrapper[4813]: I1125 10:32:27.132511 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:27 crc kubenswrapper[4813]: I1125 10:32:27.132575 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:27 crc kubenswrapper[4813]: I1125 10:32:27.132591 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:27 crc kubenswrapper[4813]: I1125 10:32:27.132615 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:27 crc kubenswrapper[4813]: I1125 10:32:27.132632 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:27Z","lastTransitionTime":"2025-11-25T10:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:27 crc kubenswrapper[4813]: I1125 10:32:27.235042 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:27 crc kubenswrapper[4813]: I1125 10:32:27.235099 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:27 crc kubenswrapper[4813]: I1125 10:32:27.235108 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:27 crc kubenswrapper[4813]: I1125 10:32:27.235123 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:27 crc kubenswrapper[4813]: I1125 10:32:27.235133 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:27Z","lastTransitionTime":"2025-11-25T10:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:27 crc kubenswrapper[4813]: I1125 10:32:27.337668 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:27 crc kubenswrapper[4813]: I1125 10:32:27.337759 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:27 crc kubenswrapper[4813]: I1125 10:32:27.337770 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:27 crc kubenswrapper[4813]: I1125 10:32:27.337807 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:27 crc kubenswrapper[4813]: I1125 10:32:27.337821 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:27Z","lastTransitionTime":"2025-11-25T10:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:27 crc kubenswrapper[4813]: I1125 10:32:27.441133 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:27 crc kubenswrapper[4813]: I1125 10:32:27.441185 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:27 crc kubenswrapper[4813]: I1125 10:32:27.441210 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:27 crc kubenswrapper[4813]: I1125 10:32:27.441231 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:27 crc kubenswrapper[4813]: I1125 10:32:27.441245 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:27Z","lastTransitionTime":"2025-11-25T10:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:27 crc kubenswrapper[4813]: I1125 10:32:27.543093 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:27 crc kubenswrapper[4813]: I1125 10:32:27.543134 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:27 crc kubenswrapper[4813]: I1125 10:32:27.543146 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:27 crc kubenswrapper[4813]: I1125 10:32:27.543163 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:27 crc kubenswrapper[4813]: I1125 10:32:27.543175 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:27Z","lastTransitionTime":"2025-11-25T10:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:27 crc kubenswrapper[4813]: I1125 10:32:27.553053 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2-metrics-certs\") pod \"network-metrics-daemon-w28xl\" (UID: \"74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2\") " pod="openshift-multus/network-metrics-daemon-w28xl" Nov 25 10:32:27 crc kubenswrapper[4813]: E1125 10:32:27.553176 4813 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 10:32:27 crc kubenswrapper[4813]: E1125 10:32:27.553454 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2-metrics-certs podName:74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2 nodeName:}" failed. No retries permitted until 2025-11-25 10:32:35.553433976 +0000 UTC m=+52.683143872 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2-metrics-certs") pod "network-metrics-daemon-w28xl" (UID: "74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 10:32:27 crc kubenswrapper[4813]: I1125 10:32:27.621648 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 10:32:27 crc kubenswrapper[4813]: I1125 10:32:27.621718 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w28xl" Nov 25 10:32:27 crc kubenswrapper[4813]: E1125 10:32:27.621856 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 10:32:27 crc kubenswrapper[4813]: E1125 10:32:27.621949 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w28xl" podUID="74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2" Nov 25 10:32:27 crc kubenswrapper[4813]: I1125 10:32:27.646216 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:27 crc kubenswrapper[4813]: I1125 10:32:27.646283 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:27 crc kubenswrapper[4813]: I1125 10:32:27.646304 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:27 crc kubenswrapper[4813]: I1125 10:32:27.646328 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:27 crc kubenswrapper[4813]: I1125 10:32:27.646348 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:27Z","lastTransitionTime":"2025-11-25T10:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:27 crc kubenswrapper[4813]: I1125 10:32:27.749902 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:27 crc kubenswrapper[4813]: I1125 10:32:27.750281 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:27 crc kubenswrapper[4813]: I1125 10:32:27.750299 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:27 crc kubenswrapper[4813]: I1125 10:32:27.750322 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:27 crc kubenswrapper[4813]: I1125 10:32:27.750339 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:27Z","lastTransitionTime":"2025-11-25T10:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:27 crc kubenswrapper[4813]: I1125 10:32:27.853293 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:27 crc kubenswrapper[4813]: I1125 10:32:27.853615 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:27 crc kubenswrapper[4813]: I1125 10:32:27.853729 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:27 crc kubenswrapper[4813]: I1125 10:32:27.853811 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:27 crc kubenswrapper[4813]: I1125 10:32:27.853903 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:27Z","lastTransitionTime":"2025-11-25T10:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:27 crc kubenswrapper[4813]: I1125 10:32:27.955957 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:27 crc kubenswrapper[4813]: I1125 10:32:27.955991 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:27 crc kubenswrapper[4813]: I1125 10:32:27.956002 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:27 crc kubenswrapper[4813]: I1125 10:32:27.956045 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:27 crc kubenswrapper[4813]: I1125 10:32:27.956057 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:27Z","lastTransitionTime":"2025-11-25T10:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:28 crc kubenswrapper[4813]: I1125 10:32:28.058124 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:28 crc kubenswrapper[4813]: I1125 10:32:28.058198 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:28 crc kubenswrapper[4813]: I1125 10:32:28.058221 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:28 crc kubenswrapper[4813]: I1125 10:32:28.058266 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:28 crc kubenswrapper[4813]: I1125 10:32:28.058300 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:28Z","lastTransitionTime":"2025-11-25T10:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:28 crc kubenswrapper[4813]: I1125 10:32:28.160403 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:28 crc kubenswrapper[4813]: I1125 10:32:28.160772 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:28 crc kubenswrapper[4813]: I1125 10:32:28.160865 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:28 crc kubenswrapper[4813]: I1125 10:32:28.160953 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:28 crc kubenswrapper[4813]: I1125 10:32:28.161041 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:28Z","lastTransitionTime":"2025-11-25T10:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:28 crc kubenswrapper[4813]: I1125 10:32:28.262559 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:28 crc kubenswrapper[4813]: I1125 10:32:28.262602 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:28 crc kubenswrapper[4813]: I1125 10:32:28.262614 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:28 crc kubenswrapper[4813]: I1125 10:32:28.262630 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:28 crc kubenswrapper[4813]: I1125 10:32:28.262642 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:28Z","lastTransitionTime":"2025-11-25T10:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:28 crc kubenswrapper[4813]: I1125 10:32:28.365367 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:28 crc kubenswrapper[4813]: I1125 10:32:28.365421 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:28 crc kubenswrapper[4813]: I1125 10:32:28.365431 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:28 crc kubenswrapper[4813]: I1125 10:32:28.365446 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:28 crc kubenswrapper[4813]: I1125 10:32:28.365456 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:28Z","lastTransitionTime":"2025-11-25T10:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:28 crc kubenswrapper[4813]: I1125 10:32:28.468672 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:28 crc kubenswrapper[4813]: I1125 10:32:28.468730 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:28 crc kubenswrapper[4813]: I1125 10:32:28.468739 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:28 crc kubenswrapper[4813]: I1125 10:32:28.468752 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:28 crc kubenswrapper[4813]: I1125 10:32:28.468763 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:28Z","lastTransitionTime":"2025-11-25T10:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:28 crc kubenswrapper[4813]: I1125 10:32:28.570760 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:28 crc kubenswrapper[4813]: I1125 10:32:28.570810 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:28 crc kubenswrapper[4813]: I1125 10:32:28.570826 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:28 crc kubenswrapper[4813]: I1125 10:32:28.570848 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:28 crc kubenswrapper[4813]: I1125 10:32:28.570867 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:28Z","lastTransitionTime":"2025-11-25T10:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:28 crc kubenswrapper[4813]: I1125 10:32:28.620847 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 10:32:28 crc kubenswrapper[4813]: I1125 10:32:28.620882 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 10:32:28 crc kubenswrapper[4813]: E1125 10:32:28.621073 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 10:32:28 crc kubenswrapper[4813]: E1125 10:32:28.621101 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 10:32:28 crc kubenswrapper[4813]: I1125 10:32:28.672947 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:28 crc kubenswrapper[4813]: I1125 10:32:28.673000 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:28 crc kubenswrapper[4813]: I1125 10:32:28.673018 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:28 crc kubenswrapper[4813]: I1125 10:32:28.673041 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:28 crc kubenswrapper[4813]: I1125 10:32:28.673059 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:28Z","lastTransitionTime":"2025-11-25T10:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:28 crc kubenswrapper[4813]: I1125 10:32:28.777863 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:28 crc kubenswrapper[4813]: I1125 10:32:28.777906 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:28 crc kubenswrapper[4813]: I1125 10:32:28.777916 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:28 crc kubenswrapper[4813]: I1125 10:32:28.777933 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:28 crc kubenswrapper[4813]: I1125 10:32:28.777945 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:28Z","lastTransitionTime":"2025-11-25T10:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:28 crc kubenswrapper[4813]: I1125 10:32:28.880196 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:28 crc kubenswrapper[4813]: I1125 10:32:28.880235 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:28 crc kubenswrapper[4813]: I1125 10:32:28.880245 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:28 crc kubenswrapper[4813]: I1125 10:32:28.880277 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:28 crc kubenswrapper[4813]: I1125 10:32:28.880288 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:28Z","lastTransitionTime":"2025-11-25T10:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:28 crc kubenswrapper[4813]: I1125 10:32:28.982501 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:28 crc kubenswrapper[4813]: I1125 10:32:28.982537 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:28 crc kubenswrapper[4813]: I1125 10:32:28.982546 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:28 crc kubenswrapper[4813]: I1125 10:32:28.982559 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:28 crc kubenswrapper[4813]: I1125 10:32:28.982570 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:28Z","lastTransitionTime":"2025-11-25T10:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:29 crc kubenswrapper[4813]: I1125 10:32:29.087704 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:29 crc kubenswrapper[4813]: I1125 10:32:29.087749 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:29 crc kubenswrapper[4813]: I1125 10:32:29.087786 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:29 crc kubenswrapper[4813]: I1125 10:32:29.087805 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:29 crc kubenswrapper[4813]: I1125 10:32:29.087817 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:29Z","lastTransitionTime":"2025-11-25T10:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:29 crc kubenswrapper[4813]: I1125 10:32:29.190645 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:29 crc kubenswrapper[4813]: I1125 10:32:29.190719 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:29 crc kubenswrapper[4813]: I1125 10:32:29.190735 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:29 crc kubenswrapper[4813]: I1125 10:32:29.190757 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:29 crc kubenswrapper[4813]: I1125 10:32:29.190772 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:29Z","lastTransitionTime":"2025-11-25T10:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:29 crc kubenswrapper[4813]: I1125 10:32:29.293853 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:29 crc kubenswrapper[4813]: I1125 10:32:29.293936 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:29 crc kubenswrapper[4813]: I1125 10:32:29.293951 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:29 crc kubenswrapper[4813]: I1125 10:32:29.293973 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:29 crc kubenswrapper[4813]: I1125 10:32:29.293995 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:29Z","lastTransitionTime":"2025-11-25T10:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:29 crc kubenswrapper[4813]: I1125 10:32:29.396979 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:29 crc kubenswrapper[4813]: I1125 10:32:29.397091 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:29 crc kubenswrapper[4813]: I1125 10:32:29.397124 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:29 crc kubenswrapper[4813]: I1125 10:32:29.397153 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:29 crc kubenswrapper[4813]: I1125 10:32:29.397175 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:29Z","lastTransitionTime":"2025-11-25T10:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:29 crc kubenswrapper[4813]: I1125 10:32:29.499345 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:29 crc kubenswrapper[4813]: I1125 10:32:29.499396 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:29 crc kubenswrapper[4813]: I1125 10:32:29.499408 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:29 crc kubenswrapper[4813]: I1125 10:32:29.499425 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:29 crc kubenswrapper[4813]: I1125 10:32:29.499438 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:29Z","lastTransitionTime":"2025-11-25T10:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:29 crc kubenswrapper[4813]: I1125 10:32:29.601629 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:29 crc kubenswrapper[4813]: I1125 10:32:29.601746 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:29 crc kubenswrapper[4813]: I1125 10:32:29.601765 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:29 crc kubenswrapper[4813]: I1125 10:32:29.601791 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:29 crc kubenswrapper[4813]: I1125 10:32:29.601808 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:29Z","lastTransitionTime":"2025-11-25T10:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:29 crc kubenswrapper[4813]: I1125 10:32:29.621148 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w28xl" Nov 25 10:32:29 crc kubenswrapper[4813]: I1125 10:32:29.621212 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 10:32:29 crc kubenswrapper[4813]: E1125 10:32:29.621295 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w28xl" podUID="74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2" Nov 25 10:32:29 crc kubenswrapper[4813]: E1125 10:32:29.621386 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 10:32:29 crc kubenswrapper[4813]: I1125 10:32:29.704213 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:29 crc kubenswrapper[4813]: I1125 10:32:29.704252 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:29 crc kubenswrapper[4813]: I1125 10:32:29.704261 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:29 crc kubenswrapper[4813]: I1125 10:32:29.704279 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:29 crc kubenswrapper[4813]: I1125 10:32:29.704290 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:29Z","lastTransitionTime":"2025-11-25T10:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:29 crc kubenswrapper[4813]: I1125 10:32:29.807480 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:29 crc kubenswrapper[4813]: I1125 10:32:29.807546 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:29 crc kubenswrapper[4813]: I1125 10:32:29.807564 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:29 crc kubenswrapper[4813]: I1125 10:32:29.807605 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:29 crc kubenswrapper[4813]: I1125 10:32:29.807623 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:29Z","lastTransitionTime":"2025-11-25T10:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:29 crc kubenswrapper[4813]: I1125 10:32:29.910155 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:29 crc kubenswrapper[4813]: I1125 10:32:29.910220 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:29 crc kubenswrapper[4813]: I1125 10:32:29.910237 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:29 crc kubenswrapper[4813]: I1125 10:32:29.910261 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:29 crc kubenswrapper[4813]: I1125 10:32:29.910279 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:29Z","lastTransitionTime":"2025-11-25T10:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:30 crc kubenswrapper[4813]: I1125 10:32:30.013624 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:30 crc kubenswrapper[4813]: I1125 10:32:30.013781 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:30 crc kubenswrapper[4813]: I1125 10:32:30.013800 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:30 crc kubenswrapper[4813]: I1125 10:32:30.013824 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:30 crc kubenswrapper[4813]: I1125 10:32:30.013842 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:30Z","lastTransitionTime":"2025-11-25T10:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:30 crc kubenswrapper[4813]: I1125 10:32:30.116319 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:30 crc kubenswrapper[4813]: I1125 10:32:30.116348 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:30 crc kubenswrapper[4813]: I1125 10:32:30.116358 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:30 crc kubenswrapper[4813]: I1125 10:32:30.116372 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:30 crc kubenswrapper[4813]: I1125 10:32:30.116382 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:30Z","lastTransitionTime":"2025-11-25T10:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:30 crc kubenswrapper[4813]: I1125 10:32:30.218529 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:30 crc kubenswrapper[4813]: I1125 10:32:30.218579 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:30 crc kubenswrapper[4813]: I1125 10:32:30.218588 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:30 crc kubenswrapper[4813]: I1125 10:32:30.218603 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:30 crc kubenswrapper[4813]: I1125 10:32:30.218616 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:30Z","lastTransitionTime":"2025-11-25T10:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:30 crc kubenswrapper[4813]: I1125 10:32:30.321747 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:30 crc kubenswrapper[4813]: I1125 10:32:30.321789 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:30 crc kubenswrapper[4813]: I1125 10:32:30.321798 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:30 crc kubenswrapper[4813]: I1125 10:32:30.321813 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:30 crc kubenswrapper[4813]: I1125 10:32:30.321824 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:30Z","lastTransitionTime":"2025-11-25T10:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:30 crc kubenswrapper[4813]: I1125 10:32:30.423672 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:30 crc kubenswrapper[4813]: I1125 10:32:30.423724 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:30 crc kubenswrapper[4813]: I1125 10:32:30.423732 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:30 crc kubenswrapper[4813]: I1125 10:32:30.423744 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:30 crc kubenswrapper[4813]: I1125 10:32:30.423753 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:30Z","lastTransitionTime":"2025-11-25T10:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:30 crc kubenswrapper[4813]: I1125 10:32:30.526282 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:30 crc kubenswrapper[4813]: I1125 10:32:30.526317 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:30 crc kubenswrapper[4813]: I1125 10:32:30.526335 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:30 crc kubenswrapper[4813]: I1125 10:32:30.526351 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:30 crc kubenswrapper[4813]: I1125 10:32:30.526362 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:30Z","lastTransitionTime":"2025-11-25T10:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:30 crc kubenswrapper[4813]: I1125 10:32:30.559174 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:30 crc kubenswrapper[4813]: I1125 10:32:30.559214 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:30 crc kubenswrapper[4813]: I1125 10:32:30.559232 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:30 crc kubenswrapper[4813]: I1125 10:32:30.559250 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:30 crc kubenswrapper[4813]: I1125 10:32:30.559261 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:30Z","lastTransitionTime":"2025-11-25T10:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:30 crc kubenswrapper[4813]: E1125 10:32:30.571405 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b8f6803-8c92-44d2-bc35-374b0f00608e\\\",\\\"systemUUID\\\":\\\"85f815b0-dc24-49ca-a7fb-6bc8e198cbb1\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:30Z is after 
2025-08-24T17:21:41Z" Nov 25 10:32:30 crc kubenswrapper[4813]: I1125 10:32:30.574739 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:30 crc kubenswrapper[4813]: I1125 10:32:30.574796 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:30 crc kubenswrapper[4813]: I1125 10:32:30.574808 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:30 crc kubenswrapper[4813]: I1125 10:32:30.574825 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:30 crc kubenswrapper[4813]: I1125 10:32:30.574839 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:30Z","lastTransitionTime":"2025-11-25T10:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:30 crc kubenswrapper[4813]: E1125 10:32:30.593188 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b8f6803-8c92-44d2-bc35-374b0f00608e\\\",\\\"systemUUID\\\":\\\"85f815b0-dc24-49ca-a7fb-6bc8e198cbb1\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:30Z is after 
2025-08-24T17:21:41Z" Nov 25 10:32:30 crc kubenswrapper[4813]: I1125 10:32:30.596370 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:30 crc kubenswrapper[4813]: I1125 10:32:30.596412 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:30 crc kubenswrapper[4813]: I1125 10:32:30.596427 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:30 crc kubenswrapper[4813]: I1125 10:32:30.596447 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:30 crc kubenswrapper[4813]: I1125 10:32:30.596460 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:30Z","lastTransitionTime":"2025-11-25T10:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:30 crc kubenswrapper[4813]: E1125 10:32:30.615214 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b8f6803-8c92-44d2-bc35-374b0f00608e\\\",\\\"systemUUID\\\":\\\"85f815b0-dc24-49ca-a7fb-6bc8e198cbb1\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:30Z is after 
2025-08-24T17:21:41Z" Nov 25 10:32:30 crc kubenswrapper[4813]: I1125 10:32:30.619200 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:30 crc kubenswrapper[4813]: I1125 10:32:30.619239 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:30 crc kubenswrapper[4813]: I1125 10:32:30.619250 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:30 crc kubenswrapper[4813]: I1125 10:32:30.619270 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:30 crc kubenswrapper[4813]: I1125 10:32:30.619287 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:30Z","lastTransitionTime":"2025-11-25T10:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:30 crc kubenswrapper[4813]: I1125 10:32:30.620407 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 10:32:30 crc kubenswrapper[4813]: I1125 10:32:30.620417 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 10:32:30 crc kubenswrapper[4813]: E1125 10:32:30.620506 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 10:32:30 crc kubenswrapper[4813]: E1125 10:32:30.620603 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 10:32:30 crc kubenswrapper[4813]: E1125 10:32:30.635050 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b8f6803-8c92-44d2-bc35-374b0f00608e\\\",\\\"systemUUID\\\":\\\"85f815b0-dc24-49ca-a7fb-6bc8e198cbb1\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:30Z is after 
2025-08-24T17:21:41Z" Nov 25 10:32:30 crc kubenswrapper[4813]: I1125 10:32:30.638611 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:30 crc kubenswrapper[4813]: I1125 10:32:30.638655 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:30 crc kubenswrapper[4813]: I1125 10:32:30.638667 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:30 crc kubenswrapper[4813]: I1125 10:32:30.638703 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:30 crc kubenswrapper[4813]: I1125 10:32:30.638767 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:30Z","lastTransitionTime":"2025-11-25T10:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:30 crc kubenswrapper[4813]: E1125 10:32:30.654552 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b8f6803-8c92-44d2-bc35-374b0f00608e\\\",\\\"systemUUID\\\":\\\"85f815b0-dc24-49ca-a7fb-6bc8e198cbb1\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:30Z is after 
2025-08-24T17:21:41Z" Nov 25 10:32:30 crc kubenswrapper[4813]: E1125 10:32:30.654851 4813 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 10:32:30 crc kubenswrapper[4813]: I1125 10:32:30.661159 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:30 crc kubenswrapper[4813]: I1125 10:32:30.661210 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:30 crc kubenswrapper[4813]: I1125 10:32:30.661225 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:30 crc kubenswrapper[4813]: I1125 10:32:30.661239 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:30 crc kubenswrapper[4813]: I1125 10:32:30.661248 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:30Z","lastTransitionTime":"2025-11-25T10:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:30 crc kubenswrapper[4813]: I1125 10:32:30.763914 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:30 crc kubenswrapper[4813]: I1125 10:32:30.763967 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:30 crc kubenswrapper[4813]: I1125 10:32:30.763983 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:30 crc kubenswrapper[4813]: I1125 10:32:30.764005 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:30 crc kubenswrapper[4813]: I1125 10:32:30.764020 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:30Z","lastTransitionTime":"2025-11-25T10:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:30 crc kubenswrapper[4813]: I1125 10:32:30.866703 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:30 crc kubenswrapper[4813]: I1125 10:32:30.866743 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:30 crc kubenswrapper[4813]: I1125 10:32:30.866753 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:30 crc kubenswrapper[4813]: I1125 10:32:30.866769 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:30 crc kubenswrapper[4813]: I1125 10:32:30.866779 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:30Z","lastTransitionTime":"2025-11-25T10:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:30 crc kubenswrapper[4813]: I1125 10:32:30.968596 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:30 crc kubenswrapper[4813]: I1125 10:32:30.968636 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:30 crc kubenswrapper[4813]: I1125 10:32:30.968646 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:30 crc kubenswrapper[4813]: I1125 10:32:30.968661 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:30 crc kubenswrapper[4813]: I1125 10:32:30.968672 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:30Z","lastTransitionTime":"2025-11-25T10:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:31 crc kubenswrapper[4813]: I1125 10:32:31.071955 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:31 crc kubenswrapper[4813]: I1125 10:32:31.071999 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:31 crc kubenswrapper[4813]: I1125 10:32:31.072009 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:31 crc kubenswrapper[4813]: I1125 10:32:31.072022 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:31 crc kubenswrapper[4813]: I1125 10:32:31.072031 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:31Z","lastTransitionTime":"2025-11-25T10:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:31 crc kubenswrapper[4813]: I1125 10:32:31.174742 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:31 crc kubenswrapper[4813]: I1125 10:32:31.174775 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:31 crc kubenswrapper[4813]: I1125 10:32:31.174786 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:31 crc kubenswrapper[4813]: I1125 10:32:31.174799 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:31 crc kubenswrapper[4813]: I1125 10:32:31.174809 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:31Z","lastTransitionTime":"2025-11-25T10:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:31 crc kubenswrapper[4813]: I1125 10:32:31.277761 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:31 crc kubenswrapper[4813]: I1125 10:32:31.277808 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:31 crc kubenswrapper[4813]: I1125 10:32:31.277817 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:31 crc kubenswrapper[4813]: I1125 10:32:31.277834 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:31 crc kubenswrapper[4813]: I1125 10:32:31.277844 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:31Z","lastTransitionTime":"2025-11-25T10:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:31 crc kubenswrapper[4813]: I1125 10:32:31.380576 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:31 crc kubenswrapper[4813]: I1125 10:32:31.380619 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:31 crc kubenswrapper[4813]: I1125 10:32:31.380629 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:31 crc kubenswrapper[4813]: I1125 10:32:31.380644 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:31 crc kubenswrapper[4813]: I1125 10:32:31.380654 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:31Z","lastTransitionTime":"2025-11-25T10:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:31 crc kubenswrapper[4813]: I1125 10:32:31.482715 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:31 crc kubenswrapper[4813]: I1125 10:32:31.482744 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:31 crc kubenswrapper[4813]: I1125 10:32:31.482752 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:31 crc kubenswrapper[4813]: I1125 10:32:31.482764 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:31 crc kubenswrapper[4813]: I1125 10:32:31.482772 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:31Z","lastTransitionTime":"2025-11-25T10:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:31 crc kubenswrapper[4813]: I1125 10:32:31.585742 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:31 crc kubenswrapper[4813]: I1125 10:32:31.585814 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:31 crc kubenswrapper[4813]: I1125 10:32:31.585827 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:31 crc kubenswrapper[4813]: I1125 10:32:31.585843 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:31 crc kubenswrapper[4813]: I1125 10:32:31.585854 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:31Z","lastTransitionTime":"2025-11-25T10:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:31 crc kubenswrapper[4813]: I1125 10:32:31.621147 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 10:32:31 crc kubenswrapper[4813]: I1125 10:32:31.621194 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w28xl" Nov 25 10:32:31 crc kubenswrapper[4813]: E1125 10:32:31.621283 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 10:32:31 crc kubenswrapper[4813]: E1125 10:32:31.621339 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w28xl" podUID="74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2" Nov 25 10:32:31 crc kubenswrapper[4813]: I1125 10:32:31.688659 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:31 crc kubenswrapper[4813]: I1125 10:32:31.688709 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:31 crc kubenswrapper[4813]: I1125 10:32:31.688717 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:31 crc kubenswrapper[4813]: I1125 10:32:31.688730 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:31 crc kubenswrapper[4813]: I1125 10:32:31.688740 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:31Z","lastTransitionTime":"2025-11-25T10:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:31 crc kubenswrapper[4813]: I1125 10:32:31.792083 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:31 crc kubenswrapper[4813]: I1125 10:32:31.792118 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:31 crc kubenswrapper[4813]: I1125 10:32:31.792131 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:31 crc kubenswrapper[4813]: I1125 10:32:31.792147 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:31 crc kubenswrapper[4813]: I1125 10:32:31.792158 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:31Z","lastTransitionTime":"2025-11-25T10:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:31 crc kubenswrapper[4813]: I1125 10:32:31.894708 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:31 crc kubenswrapper[4813]: I1125 10:32:31.894765 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:31 crc kubenswrapper[4813]: I1125 10:32:31.894779 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:31 crc kubenswrapper[4813]: I1125 10:32:31.894797 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:31 crc kubenswrapper[4813]: I1125 10:32:31.894811 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:31Z","lastTransitionTime":"2025-11-25T10:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:31 crc kubenswrapper[4813]: I1125 10:32:31.996999 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:31 crc kubenswrapper[4813]: I1125 10:32:31.997058 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:31 crc kubenswrapper[4813]: I1125 10:32:31.997068 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:31 crc kubenswrapper[4813]: I1125 10:32:31.997087 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:31 crc kubenswrapper[4813]: I1125 10:32:31.997098 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:31Z","lastTransitionTime":"2025-11-25T10:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:32 crc kubenswrapper[4813]: I1125 10:32:32.098947 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:32 crc kubenswrapper[4813]: I1125 10:32:32.099007 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:32 crc kubenswrapper[4813]: I1125 10:32:32.099025 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:32 crc kubenswrapper[4813]: I1125 10:32:32.099048 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:32 crc kubenswrapper[4813]: I1125 10:32:32.099064 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:32Z","lastTransitionTime":"2025-11-25T10:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:32 crc kubenswrapper[4813]: I1125 10:32:32.201070 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:32 crc kubenswrapper[4813]: I1125 10:32:32.201097 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:32 crc kubenswrapper[4813]: I1125 10:32:32.201105 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:32 crc kubenswrapper[4813]: I1125 10:32:32.201118 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:32 crc kubenswrapper[4813]: I1125 10:32:32.201126 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:32Z","lastTransitionTime":"2025-11-25T10:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:32 crc kubenswrapper[4813]: I1125 10:32:32.303416 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:32 crc kubenswrapper[4813]: I1125 10:32:32.303462 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:32 crc kubenswrapper[4813]: I1125 10:32:32.303476 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:32 crc kubenswrapper[4813]: I1125 10:32:32.303499 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:32 crc kubenswrapper[4813]: I1125 10:32:32.303514 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:32Z","lastTransitionTime":"2025-11-25T10:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:32 crc kubenswrapper[4813]: I1125 10:32:32.406291 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:32 crc kubenswrapper[4813]: I1125 10:32:32.406335 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:32 crc kubenswrapper[4813]: I1125 10:32:32.406348 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:32 crc kubenswrapper[4813]: I1125 10:32:32.406362 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:32 crc kubenswrapper[4813]: I1125 10:32:32.406370 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:32Z","lastTransitionTime":"2025-11-25T10:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:32 crc kubenswrapper[4813]: I1125 10:32:32.509025 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:32 crc kubenswrapper[4813]: I1125 10:32:32.509066 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:32 crc kubenswrapper[4813]: I1125 10:32:32.509077 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:32 crc kubenswrapper[4813]: I1125 10:32:32.509095 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:32 crc kubenswrapper[4813]: I1125 10:32:32.509108 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:32Z","lastTransitionTime":"2025-11-25T10:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:32 crc kubenswrapper[4813]: I1125 10:32:32.612004 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:32 crc kubenswrapper[4813]: I1125 10:32:32.612058 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:32 crc kubenswrapper[4813]: I1125 10:32:32.612066 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:32 crc kubenswrapper[4813]: I1125 10:32:32.612081 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:32 crc kubenswrapper[4813]: I1125 10:32:32.612093 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:32Z","lastTransitionTime":"2025-11-25T10:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:32 crc kubenswrapper[4813]: I1125 10:32:32.620571 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 10:32:32 crc kubenswrapper[4813]: I1125 10:32:32.620594 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 10:32:32 crc kubenswrapper[4813]: E1125 10:32:32.620718 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 10:32:32 crc kubenswrapper[4813]: E1125 10:32:32.620842 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 10:32:32 crc kubenswrapper[4813]: I1125 10:32:32.713975 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:32 crc kubenswrapper[4813]: I1125 10:32:32.714021 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:32 crc kubenswrapper[4813]: I1125 10:32:32.714035 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:32 crc kubenswrapper[4813]: I1125 10:32:32.714052 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:32 crc kubenswrapper[4813]: I1125 10:32:32.714064 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:32Z","lastTransitionTime":"2025-11-25T10:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:32 crc kubenswrapper[4813]: I1125 10:32:32.816333 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:32 crc kubenswrapper[4813]: I1125 10:32:32.816382 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:32 crc kubenswrapper[4813]: I1125 10:32:32.816390 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:32 crc kubenswrapper[4813]: I1125 10:32:32.816404 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:32 crc kubenswrapper[4813]: I1125 10:32:32.816414 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:32Z","lastTransitionTime":"2025-11-25T10:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:32 crc kubenswrapper[4813]: I1125 10:32:32.918944 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:32 crc kubenswrapper[4813]: I1125 10:32:32.918984 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:32 crc kubenswrapper[4813]: I1125 10:32:32.918994 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:32 crc kubenswrapper[4813]: I1125 10:32:32.919009 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:32 crc kubenswrapper[4813]: I1125 10:32:32.919019 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:32Z","lastTransitionTime":"2025-11-25T10:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:33 crc kubenswrapper[4813]: I1125 10:32:33.021069 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:33 crc kubenswrapper[4813]: I1125 10:32:33.021140 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:33 crc kubenswrapper[4813]: I1125 10:32:33.021149 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:33 crc kubenswrapper[4813]: I1125 10:32:33.021163 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:33 crc kubenswrapper[4813]: I1125 10:32:33.021173 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:33Z","lastTransitionTime":"2025-11-25T10:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:33 crc kubenswrapper[4813]: I1125 10:32:33.123672 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:33 crc kubenswrapper[4813]: I1125 10:32:33.123765 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:33 crc kubenswrapper[4813]: I1125 10:32:33.123775 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:33 crc kubenswrapper[4813]: I1125 10:32:33.123795 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:33 crc kubenswrapper[4813]: I1125 10:32:33.123807 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:33Z","lastTransitionTime":"2025-11-25T10:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:33 crc kubenswrapper[4813]: I1125 10:32:33.227118 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:33 crc kubenswrapper[4813]: I1125 10:32:33.227175 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:33 crc kubenswrapper[4813]: I1125 10:32:33.227190 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:33 crc kubenswrapper[4813]: I1125 10:32:33.227212 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:33 crc kubenswrapper[4813]: I1125 10:32:33.227228 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:33Z","lastTransitionTime":"2025-11-25T10:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:33 crc kubenswrapper[4813]: I1125 10:32:33.329954 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:33 crc kubenswrapper[4813]: I1125 10:32:33.330007 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:33 crc kubenswrapper[4813]: I1125 10:32:33.330022 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:33 crc kubenswrapper[4813]: I1125 10:32:33.330042 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:33 crc kubenswrapper[4813]: I1125 10:32:33.330057 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:33Z","lastTransitionTime":"2025-11-25T10:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:33 crc kubenswrapper[4813]: I1125 10:32:33.431814 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:33 crc kubenswrapper[4813]: I1125 10:32:33.431855 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:33 crc kubenswrapper[4813]: I1125 10:32:33.431867 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:33 crc kubenswrapper[4813]: I1125 10:32:33.431883 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:33 crc kubenswrapper[4813]: I1125 10:32:33.431895 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:33Z","lastTransitionTime":"2025-11-25T10:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:33 crc kubenswrapper[4813]: I1125 10:32:33.534381 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:33 crc kubenswrapper[4813]: I1125 10:32:33.534440 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:33 crc kubenswrapper[4813]: I1125 10:32:33.534451 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:33 crc kubenswrapper[4813]: I1125 10:32:33.534468 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:33 crc kubenswrapper[4813]: I1125 10:32:33.534477 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:33Z","lastTransitionTime":"2025-11-25T10:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:33 crc kubenswrapper[4813]: I1125 10:32:33.620450 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 10:32:33 crc kubenswrapper[4813]: I1125 10:32:33.620529 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w28xl" Nov 25 10:32:33 crc kubenswrapper[4813]: E1125 10:32:33.620588 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 10:32:33 crc kubenswrapper[4813]: E1125 10:32:33.620654 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-w28xl" podUID="74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2" Nov 25 10:32:33 crc kubenswrapper[4813]: I1125 10:32:33.634161 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03303956e8d88df49c9c142a7074fa39272a78ea67e868b302d3a663d7f7178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:33Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:33 crc kubenswrapper[4813]: I1125 10:32:33.636638 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:33 crc kubenswrapper[4813]: I1125 10:32:33.636710 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:33 crc kubenswrapper[4813]: I1125 10:32:33.636722 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:33 crc kubenswrapper[4813]: I1125 10:32:33.636737 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:33 crc kubenswrapper[4813]: I1125 10:32:33.636748 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:33Z","lastTransitionTime":"2025-11-25T10:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:33 crc kubenswrapper[4813]: I1125 10:32:33.650915 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86379c39-b839-4552-949c-35431188a3a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf4d6feac8fd516ce2d5e2ec13519c2bbd2d152cffe7c434fe2c4b478e8c9a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80f2017cddd8c12997b1818074df5aa37a902dca43c4b60dda58080e1887f8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225dc69c294a0063eda858d71902e848fb59d4595c25bfeecdf8dfb60fdcd6f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\
\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cbb3888ff07d07784e188a0b7b49e0f5b421cfaeb61924a0a46094fb3795b32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e393f04b541e0fc8c686b42396605529aa65fdaaf6602dd7c64a322a5071d643\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T10:31:57Z\\\",\\\"message\\\":\\\"W1125 10:31:46.900040 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1125 10:31:46.900557 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764066706 cert, and key in /tmp/serving-cert-1749499007/serving-signer.crt, /tmp/serving-cert-1749499007/serving-signer.key\\\\nI1125 10:31:47.317086 1 observer_polling.go:159] Starting file observer\\\\nW1125 10:31:47.321027 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 10:31:47.321219 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 10:31:47.325062 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1749499007/tls.crt::/tmp/serving-cert-1749499007/tls.key\\\\\\\"\\\\nF1125 10:31:57.761534 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46e1b456988c700012c86fac792b65d2e7c9a049057d5a17efbf600418191910\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3af1cb4a9a556e116874a9b7f7cb75a99db83b991b65280b6b2a96233ca0a85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3af1cb4a9a556e116874a9b7f7cb75a99db83b991b65280b6b2a96233ca0a85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:31:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:33Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:33 crc kubenswrapper[4813]: I1125 10:32:33.662861 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mmh87" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bcb41f8-67f5-4a87-8b49-07da054e0c81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fbf69eb2f0afb160e40675e9a17e8a9798a3f02de6a2f3aae7a30ef989e5479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xtc7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mmh87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:33Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:33 crc kubenswrapper[4813]: I1125 10:32:33.673890 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ece7e9c-d49a-4348-98ec-bd6ab589f750\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b85e2f2d2a870b205f19402a20540fa67104d12d2fcd412ada24c78b0602f2ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j55j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c16599a2b18976267f55176085b4b11e3e253e308707081d06d28d64f4dbb627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j55j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-knhz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:33Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:33 crc kubenswrapper[4813]: I1125 10:32:33.686318 4813 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00ebb057ca6152197fa76fc78787533ab8ddaa1e1a096c624e3efc5fcf091332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616fae5157b8d51f903f870d19e7ed40447c3eb954b0e1bd0b3323c27deb59f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:33Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:33 crc kubenswrapper[4813]: I1125 10:32:33.696590 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adac7b8b6297f077adc2d0e402547d19845a4b66a1279e143ba89f014ccdbf15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:33Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:33 crc kubenswrapper[4813]: I1125 10:32:33.712493 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rlpbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98439068-3c89-4c1b-8bb8-8aa848ef0cd3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73be3b0cabd20c94bd5c69211038398effe8adbb93eda17dbb136f17fa5ba62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdxm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rlpbx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:33Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:33 crc kubenswrapper[4813]: I1125 10:32:33.721883 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qltmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7637b907-3ae7-4b15-a4b9-a0c2217384a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://713975d4e8de4e14484cbd711f5279ddce3acad00571bf052b0ed728bd1a0ccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qvsb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qltmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:33Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:33 crc kubenswrapper[4813]: I1125 10:32:33.732992 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sbzfj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eccc6bcf-65c9-4741-a1d7-e5545661d3d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf35ea2947d355207c657bf7ef54d855cead727db293543efaa653bb03718f6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8s86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75f58510a2e937f933fadfec014e5ddff8e6cea4df17e8ade67f4c7af9be7104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8s86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sbzfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:33Z is after 2025-08-24T17:21:41Z" Nov 25 
10:32:33 crc kubenswrapper[4813]: I1125 10:32:33.738860 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:33 crc kubenswrapper[4813]: I1125 10:32:33.738898 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:33 crc kubenswrapper[4813]: I1125 10:32:33.738909 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:33 crc kubenswrapper[4813]: I1125 10:32:33.738925 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:33 crc kubenswrapper[4813]: I1125 10:32:33.738935 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:33Z","lastTransitionTime":"2025-11-25T10:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:33 crc kubenswrapper[4813]: I1125 10:32:33.750627 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8460ec76-ba89-4f8f-9055-d7274ab52d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0292e263e2315d5f0352fb15d9e84e89f103c0b8e3371db2a611b001c5a3fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ab3178c217051fe9026c77a963c194bed57ec0fb9521678f41c7c16235ca789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee35613ff013fdd9f9ba4aa81006a99cd328ab65010b9b337815829bfcc88937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1581fa41d3a426258f7c464d5e0f2ad431917ccec0616d26bb8b0affa320c90e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c4c4032f6080041e0b54686cb2c9981d2578e7a2bd02bcc1cf008c8fa3bfb6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7324d51c21107fadbd2f170e16f3cc20fc473ca9b7b1bbe0fc5e64378bd6ab7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e66dd83e85e97c04906e16d68e4fa2de6af1eeb
8595d8fd6fd8beae180f2b8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e66dd83e85e97c04906e16d68e4fa2de6af1eeb8595d8fd6fd8beae180f2b8e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T10:32:20Z\\\",\\\"message\\\":\\\" default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:19Z is after 2025-08-24T17:21:41Z]\\\\nI1125 10:32:19.622644 6322 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/machine-api-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"58a148b3-0a7b-4412-b447-f87788c4883f\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8s5k7_openshift-ovn-kubernetes(8460ec76-ba89-4f8f-9055-d7274ab52d11)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32898e756d7697bcb5b6ae6780b7b752be67b44b9ce8c2f2459477c7f0b0a28d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6554bcb1ce7e97de39f99556fc4e3db63a583ea45bd87706a3c7737a8bde4f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6554bcb1ce7e97de39f99556fc4e3db63a583ea45bd87706a3c7737a8bde4f5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8s5k7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:33Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:33 crc kubenswrapper[4813]: I1125 10:32:33.760217 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-w28xl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4dw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4dw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-w28xl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:33Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:33 crc kubenswrapper[4813]: I1125 10:32:33.770035 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"061a2a52-878f-4543-8408-3a7b838f8881\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761ff3f6b4afa8edd4892d9fe727e977fb9700a8c7ab1c149c12bfa6431951c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf09669b247e0daa0787d296aa833570e1a542082a7a698bb499dc34f16fa4be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e593ff2a6412d8dfd3cd96e456f4fe9e2f8b04302d5b9036b828a3cf480b573\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11e2aa9eaa941ade1982256194422becbe3f375508cd507f603a822b10e03134\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:33Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:33 crc kubenswrapper[4813]: I1125 10:32:33.781211 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:33Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:33 crc kubenswrapper[4813]: I1125 10:32:33.794001 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:33Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:33 crc kubenswrapper[4813]: I1125 10:32:33.804797 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:33Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:33 crc kubenswrapper[4813]: I1125 10:32:33.818805 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4s9w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2ac9045-f02f-4149-afa5-61da1452d547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbdce0d7869276078c48cf3c335c37ec3c8f324e76db30e312485508977ed8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://792d5ec80cac3667bf3ad534b473ae86eca391f49782cfc0938d789eefd24a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://792d5ec80cac3667bf3ad534b473ae86eca391f49782cfc0938d789eefd24a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2afd11e5128cad91161f49b1e5d6ac378dbd319773996dbe702bf678a45a4a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2afd11e5128cad91161f49b1e5d6ac378dbd319773996dbe702bf678a45a4a91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00af788f1e52f5e8adb3f20e61f5fbcfd1090e97a1f24d4ebe926dad23155ae5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00af788f1e52f5e8adb3f20e61f5fbcfd1090e97a1f24d4ebe926dad23155ae5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://156bff53f3008351c3f76a0cc5e9c3eeb4f19a7201392d095bc62012791d9fa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://156bff53f3008351c3f76a0cc5e9c3eeb4f19a7201392d095bc62012791d9fa5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a98899b475454bf9249b6437439cb15a56278a71678cd2c7a430b4c14ef4022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a98899b475454bf9249b6437439cb15a56278a71678cd2c7a430b4c14ef4022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://345ac26e481961ce51e21644b04d31cd5a82c981e9a2355ddd863036cabb4a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://345ac26e481961ce51e21644b04d31cd5a82c981e9a2355ddd863036cabb4a4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4s9w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:33Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:33 crc kubenswrapper[4813]: I1125 10:32:33.844066 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:33 crc kubenswrapper[4813]: I1125 10:32:33.844112 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:33 crc kubenswrapper[4813]: I1125 10:32:33.844124 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:33 crc kubenswrapper[4813]: I1125 10:32:33.844139 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:33 crc kubenswrapper[4813]: I1125 10:32:33.844151 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:33Z","lastTransitionTime":"2025-11-25T10:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:33 crc kubenswrapper[4813]: I1125 10:32:33.946737 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:33 crc kubenswrapper[4813]: I1125 10:32:33.946790 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:33 crc kubenswrapper[4813]: I1125 10:32:33.946802 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:33 crc kubenswrapper[4813]: I1125 10:32:33.946820 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:33 crc kubenswrapper[4813]: I1125 10:32:33.946834 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:33Z","lastTransitionTime":"2025-11-25T10:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:34 crc kubenswrapper[4813]: I1125 10:32:34.050608 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:34 crc kubenswrapper[4813]: I1125 10:32:34.050655 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:34 crc kubenswrapper[4813]: I1125 10:32:34.050664 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:34 crc kubenswrapper[4813]: I1125 10:32:34.050696 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:34 crc kubenswrapper[4813]: I1125 10:32:34.050706 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:34Z","lastTransitionTime":"2025-11-25T10:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:34 crc kubenswrapper[4813]: I1125 10:32:34.152962 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:34 crc kubenswrapper[4813]: I1125 10:32:34.153015 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:34 crc kubenswrapper[4813]: I1125 10:32:34.153031 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:34 crc kubenswrapper[4813]: I1125 10:32:34.153057 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:34 crc kubenswrapper[4813]: I1125 10:32:34.153121 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:34Z","lastTransitionTime":"2025-11-25T10:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:34 crc kubenswrapper[4813]: I1125 10:32:34.255108 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:34 crc kubenswrapper[4813]: I1125 10:32:34.255152 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:34 crc kubenswrapper[4813]: I1125 10:32:34.255164 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:34 crc kubenswrapper[4813]: I1125 10:32:34.255181 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:34 crc kubenswrapper[4813]: I1125 10:32:34.255199 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:34Z","lastTransitionTime":"2025-11-25T10:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:34 crc kubenswrapper[4813]: I1125 10:32:34.358055 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:34 crc kubenswrapper[4813]: I1125 10:32:34.358112 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:34 crc kubenswrapper[4813]: I1125 10:32:34.358128 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:34 crc kubenswrapper[4813]: I1125 10:32:34.358155 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:34 crc kubenswrapper[4813]: I1125 10:32:34.358208 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:34Z","lastTransitionTime":"2025-11-25T10:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:34 crc kubenswrapper[4813]: I1125 10:32:34.460812 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:34 crc kubenswrapper[4813]: I1125 10:32:34.460860 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:34 crc kubenswrapper[4813]: I1125 10:32:34.460873 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:34 crc kubenswrapper[4813]: I1125 10:32:34.460890 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:34 crc kubenswrapper[4813]: I1125 10:32:34.460903 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:34Z","lastTransitionTime":"2025-11-25T10:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:34 crc kubenswrapper[4813]: I1125 10:32:34.561450 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 10:32:34 crc kubenswrapper[4813]: I1125 10:32:34.563489 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:34 crc kubenswrapper[4813]: I1125 10:32:34.563544 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:34 crc kubenswrapper[4813]: I1125 10:32:34.563557 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:34 crc kubenswrapper[4813]: I1125 10:32:34.563570 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:34 crc kubenswrapper[4813]: I1125 10:32:34.563582 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:34Z","lastTransitionTime":"2025-11-25T10:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:34 crc kubenswrapper[4813]: I1125 10:32:34.574374 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Nov 25 10:32:34 crc kubenswrapper[4813]: I1125 10:32:34.584536 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86379c39-b839-4552-949c-35431188a3a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf4d6feac8fd516ce2d5e2ec13519c2bbd2d152cffe7c434fe2c4b478e8c9a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80f2017cddd8c12997b1818074df5aa37a902dca43c4b60dda58080e1887f8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225dc69c294a0063eda858d71902e848fb59d4595c25bfeecdf8dfb60fdcd6f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cbb3888ff07d07784e188a0b7b49e0f5b421cfaeb61924a0a46094fb3795b32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e393f04b541e0fc8c686b42396605529aa65fdaaf6602dd7c64a322a5071d643\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T10:31:57Z\\\",\\\"message\\\":\\\"W1125 10:31:46.900040 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1125 10:31:46.900557 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764066706 cert, and key in /tmp/serving-cert-1749499007/serving-signer.crt, /tmp/serving-cert-1749499007/serving-signer.key\\\\nI1125 10:31:47.317086 1 observer_polling.go:159] Starting file observer\\\\nW1125 10:31:47.321027 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 10:31:47.321219 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 10:31:47.325062 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1749499007/tls.crt::/tmp/serving-cert-1749499007/tls.key\\\\\\\"\\\\nF1125 10:31:57.761534 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46e1b456988c700012c86fac792b65d2e7c9a049057d5a17efbf600418191910\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3af1cb4a9a556e116874a9b7f7cb75a99db83b991b65280b6b2a96233ca0a85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://f3af1cb4a9a556e116874a9b7f7cb75a99db83b991b65280b6b2a96233ca0a85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:31:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:34Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:34 crc kubenswrapper[4813]: I1125 10:32:34.596577 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mmh87" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bcb41f8-67f5-4a87-8b49-07da054e0c81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fbf69eb2f0afb160e40675e9a17e8a9798a3f02de6a2f3aae7a30ef989e5479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xtc7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mmh87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:34Z is after 
2025-08-24T17:21:41Z" Nov 25 10:32:34 crc kubenswrapper[4813]: I1125 10:32:34.611031 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ece7e9c-d49a-4348-98ec-bd6ab589f750\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b85e2f2d2a870b205f19402a20540fa67104d12d2fcd412ada24c78b0602f2ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j55j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c16599a2b18976267f55176085b4b11e3e253e308707081d06d28d64f4dbb627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j55j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-knhz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:34Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:34 crc kubenswrapper[4813]: I1125 10:32:34.621318 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 10:32:34 crc kubenswrapper[4813]: I1125 10:32:34.621323 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 10:32:34 crc kubenswrapper[4813]: E1125 10:32:34.621498 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 10:32:34 crc kubenswrapper[4813]: E1125 10:32:34.621546 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 10:32:34 crc kubenswrapper[4813]: I1125 10:32:34.622820 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sbzfj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eccc6bcf-65c9-4741-a1d7-e5545661d3d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf35ea2947d355207c657bf7ef54d855cead727db293543efaa653bb03718f6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"
name\\\":\\\"kube-api-access-t8s86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75f58510a2e937f933fadfec014e5ddff8e6cea4df17e8ade67f4c7af9be7104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8s86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sbzfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:34Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:34 crc kubenswrapper[4813]: I1125 10:32:34.636403 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00ebb057ca6152197fa76fc78787533ab8ddaa1e1a096c624e3efc5fcf091332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616fae5157b8d51f903f870d19e7ed40447c3eb954b0e1bd0b3323c27deb59f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:34Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:34 crc kubenswrapper[4813]: I1125 10:32:34.649593 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adac7b8b6297f077adc2d0e402547d19845a4b66a1279e143ba89f014ccdbf15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:34Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:34 crc kubenswrapper[4813]: I1125 10:32:34.660509 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rlpbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98439068-3c89-4c1b-8bb8-8aa848ef0cd3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73be3b0cabd20c94bd5c69211038398effe8adbb93eda17dbb136f17fa5ba62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdxm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rlpbx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:34Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:34 crc kubenswrapper[4813]: I1125 10:32:34.665731 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:34 crc kubenswrapper[4813]: I1125 10:32:34.665766 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:34 crc kubenswrapper[4813]: I1125 10:32:34.665780 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:34 crc kubenswrapper[4813]: I1125 10:32:34.665798 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:34 crc kubenswrapper[4813]: I1125 10:32:34.665810 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:34Z","lastTransitionTime":"2025-11-25T10:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:34 crc kubenswrapper[4813]: I1125 10:32:34.673020 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qltmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7637b907-3ae7-4b15-a4b9-a0c2217384a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://713975d4e8de4e14484cbd711f5279ddce3acad00571bf052b0ed728bd1a0ccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qvsb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"h
ostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qltmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:34Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:34 crc kubenswrapper[4813]: I1125 10:32:34.687047 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4s9w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2ac9045-f02f-4149-afa5-61da1452d547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbdce0d7869276078c48cf3c335c37ec3c8f324e76db30e312485508977ed8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://792d5ec80cac3667bf3ad534b473ae86eca391f49782cfc0938d789eefd24a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://792d5ec80cac3667bf3ad534b473ae86eca391f49782cfc0938d789eefd24a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2afd11e5128cad91161f49b1e5d6ac378dbd319773996dbe702bf678a45a4a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2afd11e5128cad91161f49b1e5d6ac378dbd319773996dbe702bf678a45a4a91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00af788f1e52f5e8adb3f20e61f5fbcfd1090e97a1f24d4ebe926dad23155ae5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00af788f1e52f5e8adb3f20e61f5fbcfd1090e97a1f24d4ebe926dad23155ae5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://156bff53f3008351c3f76a0cc5e9c3eeb4f19a7201392d095bc62012791d9fa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-
dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://156bff53f3008351c3f76a0cc5e9c3eeb4f19a7201392d095bc62012791d9fa5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a98899b475454bf9249b6437439cb15a56278a71678cd2c7a430b4c14ef4022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a98899b475454bf9249b6437439cb15a56278a71678cd2c7a430b4c14ef4022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://345ac26e481961ce51e21644b04d31cd5a82c981e9a2355ddd863036cabb4a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://345ac26e481961ce51e21644b04d31cd5a82c981e9a2355ddd863036cabb4a4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4s9w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:34Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:34 crc kubenswrapper[4813]: I1125 10:32:34.708181 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8460ec76-ba89-4f8f-9055-d7274ab52d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0292e263e2315d5f0352fb15d9e84e89f103c0b8e3371db2a611b001c5a3fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ab3178c217051fe9026c77a963c194bed57ec0fb9521678f41c7c16235ca789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovn
kube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee35613ff013fdd9f9ba4aa81006a99cd328ab65010b9b337815829bfcc88937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1581fa41d3a426258f7c464d5e0f2ad431917ccec0616d26bb8b0affa320c90e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c4c4032f6080041e0b54686cb2c9981d2578e7a2bd02bcc1cf008c8fa3bfb6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7324d51c21107fadbd2f170e16f3cc20fc473ca9b7b1bbe0fc5e64378bd6ab7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e66dd83e85e97c04906e16d68e4fa2de6af1eeb8595d8fd6fd8beae180f2b8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e66dd83e85e97c04906e16d68e4fa2de6af1eeb8595d8fd6fd8beae180f2b8e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T10:32:20Z\\\",\\\"message\\\":\\\" default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:19Z is after 2025-08-24T17:21:41Z]\\\\nI1125 10:32:19.622644 6322 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/machine-api-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"58a148b3-0a7b-4412-b447-f87788c4883f\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, 
AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-8s5k7_openshift-ovn-kubernetes(8460ec76-ba89-4f8f-9055-d7274ab52d11)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32898e756d7697bcb5b6ae6780b7b752be67b44b9ce8c2f2459477c7f0b0a28d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env
\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6554bcb1ce7e97de39f99556fc4e3db63a583ea45bd87706a3c7737a8bde4f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6554bcb1ce7e97de39f99556fc4e3db63a583ea45bd87706a3c7737a8bde4f5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8s5k7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:34Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:34 crc kubenswrapper[4813]: I1125 10:32:34.720594 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-w28xl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4dw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4dw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-w28xl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:34Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:34 crc kubenswrapper[4813]: I1125 10:32:34.739004 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"061a2a52-878f-4543-8408-3a7b838f8881\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761ff3f6b4afa8edd4892d9fe727e977fb9700a8c7ab1c149c12bfa6431951c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf09669b247e0daa0787d296aa833570e1a542082a7a698bb499dc34f16fa4be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e593ff2a6412d8dfd3cd96e456f4fe9e2f8b04302d5b9036b828a3cf480b573\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11e2aa9eaa941ade1982256194422becbe3f375508cd507f603a822b10e03134\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:34Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:34 crc kubenswrapper[4813]: I1125 10:32:34.752200 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:34Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:34 crc kubenswrapper[4813]: I1125 10:32:34.764536 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:34Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:34 crc kubenswrapper[4813]: I1125 10:32:34.768182 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:34 crc kubenswrapper[4813]: I1125 10:32:34.768219 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:34 crc kubenswrapper[4813]: I1125 10:32:34.768229 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:34 crc kubenswrapper[4813]: I1125 10:32:34.768245 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:34 crc kubenswrapper[4813]: I1125 10:32:34.768257 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:34Z","lastTransitionTime":"2025-11-25T10:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:34 crc kubenswrapper[4813]: I1125 10:32:34.777162 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:34Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:34 crc kubenswrapper[4813]: I1125 10:32:34.790905 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03303956e8d88df49c9c142a7074fa39272a78ea67e868b302d3a663d7f7178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:34Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:34 crc kubenswrapper[4813]: I1125 10:32:34.870442 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:34 crc kubenswrapper[4813]: I1125 10:32:34.870502 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:34 crc kubenswrapper[4813]: I1125 10:32:34.870515 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:34 crc kubenswrapper[4813]: I1125 10:32:34.870531 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:34 crc kubenswrapper[4813]: I1125 10:32:34.870542 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:34Z","lastTransitionTime":"2025-11-25T10:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:34 crc kubenswrapper[4813]: I1125 10:32:34.973233 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:34 crc kubenswrapper[4813]: I1125 10:32:34.973288 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:34 crc kubenswrapper[4813]: I1125 10:32:34.973296 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:34 crc kubenswrapper[4813]: I1125 10:32:34.973308 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:34 crc kubenswrapper[4813]: I1125 10:32:34.973317 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:34Z","lastTransitionTime":"2025-11-25T10:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:35 crc kubenswrapper[4813]: I1125 10:32:35.079370 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:35 crc kubenswrapper[4813]: I1125 10:32:35.079431 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:35 crc kubenswrapper[4813]: I1125 10:32:35.079447 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:35 crc kubenswrapper[4813]: I1125 10:32:35.079468 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:35 crc kubenswrapper[4813]: I1125 10:32:35.079558 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:35Z","lastTransitionTime":"2025-11-25T10:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:35 crc kubenswrapper[4813]: I1125 10:32:35.183355 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:35 crc kubenswrapper[4813]: I1125 10:32:35.183397 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:35 crc kubenswrapper[4813]: I1125 10:32:35.183407 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:35 crc kubenswrapper[4813]: I1125 10:32:35.183425 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:35 crc kubenswrapper[4813]: I1125 10:32:35.183437 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:35Z","lastTransitionTime":"2025-11-25T10:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:35 crc kubenswrapper[4813]: I1125 10:32:35.286256 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:35 crc kubenswrapper[4813]: I1125 10:32:35.286320 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:35 crc kubenswrapper[4813]: I1125 10:32:35.286333 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:35 crc kubenswrapper[4813]: I1125 10:32:35.286358 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:35 crc kubenswrapper[4813]: I1125 10:32:35.286373 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:35Z","lastTransitionTime":"2025-11-25T10:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:35 crc kubenswrapper[4813]: I1125 10:32:35.389440 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:35 crc kubenswrapper[4813]: I1125 10:32:35.389489 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:35 crc kubenswrapper[4813]: I1125 10:32:35.389500 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:35 crc kubenswrapper[4813]: I1125 10:32:35.389518 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:35 crc kubenswrapper[4813]: I1125 10:32:35.389530 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:35Z","lastTransitionTime":"2025-11-25T10:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:35 crc kubenswrapper[4813]: I1125 10:32:35.492360 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:35 crc kubenswrapper[4813]: I1125 10:32:35.492426 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:35 crc kubenswrapper[4813]: I1125 10:32:35.492470 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:35 crc kubenswrapper[4813]: I1125 10:32:35.492496 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:35 crc kubenswrapper[4813]: I1125 10:32:35.492511 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:35Z","lastTransitionTime":"2025-11-25T10:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:35 crc kubenswrapper[4813]: I1125 10:32:35.594792 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:35 crc kubenswrapper[4813]: I1125 10:32:35.594837 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:35 crc kubenswrapper[4813]: I1125 10:32:35.594849 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:35 crc kubenswrapper[4813]: I1125 10:32:35.594865 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:35 crc kubenswrapper[4813]: I1125 10:32:35.594874 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:35Z","lastTransitionTime":"2025-11-25T10:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:35 crc kubenswrapper[4813]: I1125 10:32:35.620621 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 10:32:35 crc kubenswrapper[4813]: E1125 10:32:35.620801 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 10:32:35 crc kubenswrapper[4813]: I1125 10:32:35.620884 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w28xl" Nov 25 10:32:35 crc kubenswrapper[4813]: E1125 10:32:35.621376 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-w28xl" podUID="74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2" Nov 25 10:32:35 crc kubenswrapper[4813]: I1125 10:32:35.622147 4813 scope.go:117] "RemoveContainer" containerID="0e66dd83e85e97c04906e16d68e4fa2de6af1eeb8595d8fd6fd8beae180f2b8e" Nov 25 10:32:35 crc kubenswrapper[4813]: I1125 10:32:35.632661 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2-metrics-certs\") pod \"network-metrics-daemon-w28xl\" (UID: \"74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2\") " pod="openshift-multus/network-metrics-daemon-w28xl" Nov 25 10:32:35 crc kubenswrapper[4813]: E1125 10:32:35.632798 4813 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 10:32:35 crc kubenswrapper[4813]: E1125 10:32:35.632850 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2-metrics-certs podName:74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2 nodeName:}" failed. No retries permitted until 2025-11-25 10:32:51.632834033 +0000 UTC m=+68.762543919 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2-metrics-certs") pod "network-metrics-daemon-w28xl" (UID: "74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 10:32:35 crc kubenswrapper[4813]: I1125 10:32:35.697819 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:35 crc kubenswrapper[4813]: I1125 10:32:35.697860 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:35 crc kubenswrapper[4813]: I1125 10:32:35.697873 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:35 crc kubenswrapper[4813]: I1125 10:32:35.697892 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:35 crc kubenswrapper[4813]: I1125 10:32:35.697905 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:35Z","lastTransitionTime":"2025-11-25T10:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:35 crc kubenswrapper[4813]: I1125 10:32:35.733599 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 10:32:35 crc kubenswrapper[4813]: E1125 10:32:35.733847 4813 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 10:32:35 crc kubenswrapper[4813]: E1125 10:32:35.733920 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 10:33:07.733901781 +0000 UTC m=+84.863611667 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 10:32:35 crc kubenswrapper[4813]: I1125 10:32:35.799444 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:35 crc kubenswrapper[4813]: I1125 10:32:35.799469 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:35 crc kubenswrapper[4813]: I1125 10:32:35.799480 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:35 crc kubenswrapper[4813]: I1125 10:32:35.799494 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:35 crc kubenswrapper[4813]: I1125 10:32:35.799503 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:35Z","lastTransitionTime":"2025-11-25T10:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:35 crc kubenswrapper[4813]: I1125 10:32:35.834818 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:32:35 crc kubenswrapper[4813]: I1125 10:32:35.834922 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 10:32:35 crc kubenswrapper[4813]: I1125 10:32:35.834959 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 10:32:35 crc kubenswrapper[4813]: E1125 10:32:35.834997 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:33:07.834964889 +0000 UTC m=+84.964674785 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:32:35 crc kubenswrapper[4813]: I1125 10:32:35.835058 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 10:32:35 crc kubenswrapper[4813]: E1125 10:32:35.835078 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 10:32:35 crc kubenswrapper[4813]: E1125 10:32:35.835158 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 10:32:35 crc kubenswrapper[4813]: E1125 10:32:35.835181 4813 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 10:32:35 crc kubenswrapper[4813]: E1125 10:32:35.835182 4813 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 10:32:35 crc kubenswrapper[4813]: E1125 10:32:35.835214 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 10:32:35 crc kubenswrapper[4813]: E1125 10:32:35.835229 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-25 10:33:07.835218745 +0000 UTC m=+84.964928651 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 10:32:35 crc kubenswrapper[4813]: E1125 10:32:35.835229 4813 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 10:32:35 crc kubenswrapper[4813]: E1125 10:32:35.835107 4813 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 10:32:35 crc kubenswrapper[4813]: E1125 10:32:35.835287 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 10:33:07.835273967 +0000 UTC m=+84.964983853 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 10:32:35 crc kubenswrapper[4813]: E1125 10:32:35.835303 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-25 10:33:07.835295587 +0000 UTC m=+84.965005473 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 10:32:35 crc kubenswrapper[4813]: I1125 10:32:35.901603 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:35 crc kubenswrapper[4813]: I1125 10:32:35.901634 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:35 crc kubenswrapper[4813]: I1125 10:32:35.901648 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:35 crc kubenswrapper[4813]: I1125 10:32:35.901660 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:35 crc kubenswrapper[4813]: I1125 10:32:35.901669 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:35Z","lastTransitionTime":"2025-11-25T10:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:35 crc kubenswrapper[4813]: I1125 10:32:35.904013 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8s5k7_8460ec76-ba89-4f8f-9055-d7274ab52d11/ovnkube-controller/1.log" Nov 25 10:32:35 crc kubenswrapper[4813]: I1125 10:32:35.906107 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" event={"ID":"8460ec76-ba89-4f8f-9055-d7274ab52d11","Type":"ContainerStarted","Data":"0f47ead7e465395c7960e5ab292e2f2869ed1630436f2739b1e0420f217a96cf"} Nov 25 10:32:35 crc kubenswrapper[4813]: I1125 10:32:35.906970 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" Nov 25 10:32:35 crc kubenswrapper[4813]: I1125 10:32:35.919832 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4s9w7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2ac9045-f02f-4149-afa5-61da1452d547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbdce0d7869276078c48cf3c335c37ec3c8f324e76db30e312485508977ed8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://792d5ec80cac3667bf3ad534b473ae86eca391f49782cfc0938d789eefd24a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://792d5ec80cac3667bf3ad534b473ae86eca391f49782cfc0938d789eefd24a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2afd11e5128cad91161f49b1e5d6ac378dbd319773996dbe702bf678a45a4a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2afd11e5128cad91161f49b1e5d6ac378dbd319773996dbe702bf678a45a4a91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00af788f1e52f5e8adb3f20e61f5fbcfd1090e97a1f24d4ebe926dad23155ae5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00af788f1e52f5e8adb3f20e61f5fbcfd1090e97a1f24d4ebe926dad23155ae5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://156bff53f3008351c3f76a0cc5e9c3eeb4f19a7201392d095bc62012791d9fa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://156bff53f3008351c3f76a0cc5e9c3eeb4f19a7201392d095bc62012791d9fa5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a98899b475454bf9249b6437439cb15a56278a71678cd2c7a430b4c14ef4022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a98899b475454bf9249b6437439cb15a56278a71678cd2c7a430b4c14ef4022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://345ac26e481961ce51e21644b04d31cd5a82c981e9a2355ddd863036cabb4a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://345ac26e481961ce51e21644b04d31cd5a82c981e9a2355ddd863036cabb4a4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4s9w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:35Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:35 crc kubenswrapper[4813]: I1125 10:32:35.936607 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8460ec76-ba89-4f8f-9055-d7274ab52d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0292e263e2315d5f0352fb15d9e84e89f103c0b8e3371db2a611b001c5a3fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ab3178c217051fe9026c77a963c194bed57ec0fb9521678f41c7c16235ca789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee35613ff013fdd9f9ba4aa81006a99cd328ab65010b9b337815829bfcc88937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1581fa41d3a426258f7c464d5e0f2ad431917ccec0616d26bb8b0affa320c90e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c4c4032f6080041e0b54686cb2c9981d2578e7a2bd02bcc1cf008c8fa3bfb6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7324d51c21107fadbd2f170e16f3cc20fc473ca9b7b1bbe0fc5e64378bd6ab7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f47ead7e465395c7960e5ab292e2f2869ed1630436f2739b1e0420f217a96cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e66dd83e85e97c04906e16d68e4fa2de6af1eeb8595d8fd6fd8beae180f2b8e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T10:32:20Z\\\",\\\"message\\\":\\\" default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:19Z is after 2025-08-24T17:21:41Z]\\\\nI1125 10:32:19.622644 6322 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/machine-api-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"58a148b3-0a7b-4412-b447-f87788c4883f\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, 
Groups:[]\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32898e756d7697bcb5b6ae6780b7b752be67b44b9ce8c2f2459477c7f0b0a28d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{
\\\"containerID\\\":\\\"cri-o://6554bcb1ce7e97de39f99556fc4e3db63a583ea45bd87706a3c7737a8bde4f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6554bcb1ce7e97de39f99556fc4e3db63a583ea45bd87706a3c7737a8bde4f5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8s5k7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:35Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:35 crc kubenswrapper[4813]: I1125 10:32:35.954610 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-w28xl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4dw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4dw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-w28xl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:35Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:35 crc kubenswrapper[4813]: I1125 10:32:35.969914 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"061a2a52-878f-4543-8408-3a7b838f8881\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761ff3f6b4afa8edd4892d9fe727e977fb9700a8c7ab1c149c12bfa6431951c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf09669b247e0daa0787d296aa833570e1a542082a7a698bb499dc34f16fa4be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e593ff2a6412d8dfd3cd96e456f4fe9e2f8b04302d5b9036b828a3cf480b573\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11e2aa9eaa941ade1982256194422becbe3f375508cd507f603a822b10e03134\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:35Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:35 crc kubenswrapper[4813]: I1125 10:32:35.990830 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:35Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:36 crc kubenswrapper[4813]: I1125 10:32:36.003966 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:36 crc kubenswrapper[4813]: I1125 10:32:36.004244 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:36 crc kubenswrapper[4813]: I1125 10:32:36.004320 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:36 crc kubenswrapper[4813]: I1125 10:32:36.004381 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:36 crc kubenswrapper[4813]: I1125 10:32:36.004444 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:36Z","lastTransitionTime":"2025-11-25T10:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:36 crc kubenswrapper[4813]: I1125 10:32:36.010997 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:36Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:36 crc kubenswrapper[4813]: I1125 10:32:36.030362 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:36Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:36 crc kubenswrapper[4813]: I1125 10:32:36.046871 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03303956e8d88df49c9c142a7074fa39272a78ea67e868b302d3a663d7f7178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:36Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:36 crc kubenswrapper[4813]: I1125 10:32:36.059856 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86379c39-b839-4552-949c-35431188a3a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf4d6feac8fd516ce2d5e2ec13519c2bbd2d152cffe7c434fe2c4b478e8c9a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80f2017cddd8c12997b1818074df5aa37a902dca43c4b60dda58080e1887f8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225dc69c294a0063eda858d71902e848fb59d4595c25bfeecdf8dfb60fdcd6f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cbb3888ff07d07784e188a0b7b49e0f5b421cfaeb61924a0a46094fb3795b32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e393f04b541e0fc8c686b42396605529aa65fdaaf6602dd7c64a322a5071d643\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T10:31:57Z\\\",\\\"message\\\":\\\"W1125 10:31:46.900040 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1125 10:31:46.900557 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764066706 cert, and key in /tmp/serving-cert-1749499007/serving-signer.crt, /tmp/serving-cert-1749499007/serving-signer.key\\\\nI1125 10:31:47.317086 1 observer_polling.go:159] Starting file observer\\\\nW1125 10:31:47.321027 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 10:31:47.321219 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 10:31:47.325062 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1749499007/tls.crt::/tmp/serving-cert-1749499007/tls.key\\\\\\\"\\\\nF1125 10:31:57.761534 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46e1b456988c700012c86fac792b65d2e7c9a049057d5a17efbf600418191910\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3af1cb4a9a556e116874a9b7f7cb75a99db83b991b65280b6b2a96233ca0a85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3af1cb4a9a556e116874a9b7f7cb75a99db83b991b65280b6b2a96233ca0a85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:31:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:36Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:36 crc kubenswrapper[4813]: I1125 10:32:36.068908 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mmh87" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bcb41f8-67f5-4a87-8b49-07da054e0c81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fbf69eb2f0afb160e40675e9a17e8a9798a3f02de6a2f3aae7a30ef989e5479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xtc7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mmh87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:36Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:36 crc kubenswrapper[4813]: I1125 10:32:36.078537 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ece7e9c-d49a-4348-98ec-bd6ab589f750\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b85e2f2d2a870b205f19402a20540fa67104d12d2fcd412ada24c78b0602f2ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j55j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c16599a2b18976267f55176085b4b11e3e253e308707081d06d28d64f4dbb627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j55j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-knhz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:36Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:36 crc kubenswrapper[4813]: I1125 10:32:36.088986 4813 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sbzfj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eccc6bcf-65c9-4741-a1d7-e5545661d3d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf35ea2947d355207c657bf7ef54d855cead727db293543efaa653bb03718f6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8s86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75f58510a2e937f933fadfec014e5ddff8e6cea4df17e8ade67f4c7af9be7104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8s86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sbzfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:36Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:36 crc kubenswrapper[4813]: I1125 10:32:36.099356 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7391b3f2-dce9-4286-b622-7e7202a042c0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b823e81d1130cdb4373ba0b3d00a5f2d0717e34dcf36d2172550263b44e953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fa62598abd071ec69894326a022e35c2b383a5d5a1b893b0ecc1e30b8b775ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21dd198f1963287a0866dc0aa9d9854472f833cac0d0146a142a370e236b09f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"
},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9ab19e784bbd45e4f4c23288211674ac0d0affbe2736d338967e9237d672760\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9ab19e784bbd45e4f4c23288211674ac0d0affbe2736d338967e9237d672760\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:31:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:36Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:36 crc kubenswrapper[4813]: I1125 10:32:36.110015 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:36 crc kubenswrapper[4813]: I1125 10:32:36.110074 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:36 crc kubenswrapper[4813]: I1125 10:32:36.110089 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:36 crc kubenswrapper[4813]: I1125 10:32:36.110110 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:36 crc kubenswrapper[4813]: I1125 10:32:36.110122 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:36Z","lastTransitionTime":"2025-11-25T10:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:36 crc kubenswrapper[4813]: I1125 10:32:36.110913 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00ebb057ca6152197fa76fc78787533ab8ddaa1e1a096c624e3efc5fcf091332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616fae5157b8d51f903f870d19e7ed40447c3eb954b0e1bd0b3323c27deb59f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:36Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:36 crc kubenswrapper[4813]: I1125 10:32:36.121983 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adac7b8b6297f077adc2d0e402547d19845a4b66a1279e143ba89f014ccdbf15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:36Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:36 crc kubenswrapper[4813]: I1125 10:32:36.133440 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rlpbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98439068-3c89-4c1b-8bb8-8aa848ef0cd3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73be3b0cabd20c94bd5c69211038398effe8adbb93eda17dbb136f17fa5ba62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdxm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rlpbx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:36Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:36 crc kubenswrapper[4813]: I1125 10:32:36.144389 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qltmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7637b907-3ae7-4b15-a4b9-a0c2217384a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://713975d4e8de4e14484cbd711f5279ddce3acad00571bf052b0ed728bd1a0ccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qvsb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qltmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:36Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:36 crc kubenswrapper[4813]: I1125 10:32:36.213571 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:36 crc kubenswrapper[4813]: I1125 10:32:36.213624 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:36 crc kubenswrapper[4813]: I1125 10:32:36.213637 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:36 crc 
kubenswrapper[4813]: I1125 10:32:36.213655 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:36 crc kubenswrapper[4813]: I1125 10:32:36.213667 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:36Z","lastTransitionTime":"2025-11-25T10:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:36 crc kubenswrapper[4813]: I1125 10:32:36.316189 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:36 crc kubenswrapper[4813]: I1125 10:32:36.316241 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:36 crc kubenswrapper[4813]: I1125 10:32:36.316251 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:36 crc kubenswrapper[4813]: I1125 10:32:36.316269 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:36 crc kubenswrapper[4813]: I1125 10:32:36.316281 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:36Z","lastTransitionTime":"2025-11-25T10:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:36 crc kubenswrapper[4813]: I1125 10:32:36.418960 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:36 crc kubenswrapper[4813]: I1125 10:32:36.418997 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:36 crc kubenswrapper[4813]: I1125 10:32:36.419035 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:36 crc kubenswrapper[4813]: I1125 10:32:36.419052 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:36 crc kubenswrapper[4813]: I1125 10:32:36.419061 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:36Z","lastTransitionTime":"2025-11-25T10:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:36 crc kubenswrapper[4813]: I1125 10:32:36.520986 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:36 crc kubenswrapper[4813]: I1125 10:32:36.521054 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:36 crc kubenswrapper[4813]: I1125 10:32:36.521064 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:36 crc kubenswrapper[4813]: I1125 10:32:36.521078 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:36 crc kubenswrapper[4813]: I1125 10:32:36.521087 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:36Z","lastTransitionTime":"2025-11-25T10:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:36 crc kubenswrapper[4813]: I1125 10:32:36.620650 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 10:32:36 crc kubenswrapper[4813]: I1125 10:32:36.620707 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 10:32:36 crc kubenswrapper[4813]: E1125 10:32:36.620860 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 10:32:36 crc kubenswrapper[4813]: E1125 10:32:36.620981 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 10:32:36 crc kubenswrapper[4813]: I1125 10:32:36.623139 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:36 crc kubenswrapper[4813]: I1125 10:32:36.623182 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:36 crc kubenswrapper[4813]: I1125 10:32:36.623192 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:36 crc kubenswrapper[4813]: I1125 10:32:36.623208 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:36 crc kubenswrapper[4813]: I1125 10:32:36.623218 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:36Z","lastTransitionTime":"2025-11-25T10:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:36 crc kubenswrapper[4813]: I1125 10:32:36.725569 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:36 crc kubenswrapper[4813]: I1125 10:32:36.725608 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:36 crc kubenswrapper[4813]: I1125 10:32:36.725617 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:36 crc kubenswrapper[4813]: I1125 10:32:36.725631 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:36 crc kubenswrapper[4813]: I1125 10:32:36.725642 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:36Z","lastTransitionTime":"2025-11-25T10:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:36 crc kubenswrapper[4813]: I1125 10:32:36.828361 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:36 crc kubenswrapper[4813]: I1125 10:32:36.828408 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:36 crc kubenswrapper[4813]: I1125 10:32:36.828418 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:36 crc kubenswrapper[4813]: I1125 10:32:36.828432 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:36 crc kubenswrapper[4813]: I1125 10:32:36.828441 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:36Z","lastTransitionTime":"2025-11-25T10:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:36 crc kubenswrapper[4813]: I1125 10:32:36.911556 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8s5k7_8460ec76-ba89-4f8f-9055-d7274ab52d11/ovnkube-controller/2.log" Nov 25 10:32:36 crc kubenswrapper[4813]: I1125 10:32:36.912215 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8s5k7_8460ec76-ba89-4f8f-9055-d7274ab52d11/ovnkube-controller/1.log" Nov 25 10:32:36 crc kubenswrapper[4813]: I1125 10:32:36.914850 4813 generic.go:334] "Generic (PLEG): container finished" podID="8460ec76-ba89-4f8f-9055-d7274ab52d11" containerID="0f47ead7e465395c7960e5ab292e2f2869ed1630436f2739b1e0420f217a96cf" exitCode=1 Nov 25 10:32:36 crc kubenswrapper[4813]: I1125 10:32:36.914892 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" event={"ID":"8460ec76-ba89-4f8f-9055-d7274ab52d11","Type":"ContainerDied","Data":"0f47ead7e465395c7960e5ab292e2f2869ed1630436f2739b1e0420f217a96cf"} Nov 25 10:32:36 crc kubenswrapper[4813]: I1125 10:32:36.914924 4813 scope.go:117] "RemoveContainer" containerID="0e66dd83e85e97c04906e16d68e4fa2de6af1eeb8595d8fd6fd8beae180f2b8e" Nov 25 10:32:36 crc kubenswrapper[4813]: I1125 10:32:36.915807 4813 scope.go:117] "RemoveContainer" containerID="0f47ead7e465395c7960e5ab292e2f2869ed1630436f2739b1e0420f217a96cf" Nov 25 10:32:36 crc kubenswrapper[4813]: E1125 10:32:36.916032 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-8s5k7_openshift-ovn-kubernetes(8460ec76-ba89-4f8f-9055-d7274ab52d11)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" podUID="8460ec76-ba89-4f8f-9055-d7274ab52d11" Nov 25 10:32:36 crc kubenswrapper[4813]: I1125 10:32:36.930659 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:36 crc kubenswrapper[4813]: I1125 10:32:36.930715 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:36 crc kubenswrapper[4813]: I1125 10:32:36.930730 4813 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 25 10:32:36 crc kubenswrapper[4813]: I1125 10:32:36.930744 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:36 crc kubenswrapper[4813]: I1125 10:32:36.930754 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:36Z","lastTransitionTime":"2025-11-25T10:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:36 crc kubenswrapper[4813]: I1125 10:32:36.930911 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00ebb057ca6152197fa76fc78787533ab8ddaa1e1a096c624e3efc5fcf091332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616fae5157b8d51f903f870d19e7ed40447c3eb954b0e1bd0b3323c27deb59f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:36Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:36 crc kubenswrapper[4813]: I1125 10:32:36.942110 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adac7b8b6297f077adc2d0e402547d19845a4b66a1279e143ba89f014ccdbf15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:36Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:36 crc kubenswrapper[4813]: I1125 10:32:36.955795 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rlpbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98439068-3c89-4c1b-8bb8-8aa848ef0cd3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73be3b0cabd20c94bd5c69211038398effe8adbb93eda17dbb136f17fa5ba62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdxm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rlpbx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:36Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:36 crc kubenswrapper[4813]: I1125 10:32:36.964879 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qltmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7637b907-3ae7-4b15-a4b9-a0c2217384a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://713975d4e8de4e14484cbd711f5279ddce3acad00571bf052b0ed728bd1a0ccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qvsb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qltmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:36Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:37 crc kubenswrapper[4813]: I1125 10:32:37.000083 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sbzfj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eccc6bcf-65c9-4741-a1d7-e5545661d3d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf35ea2947d355207c657bf7ef54d855cead727db293543efaa653bb03718f6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8s86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75f58510a2e937f933fadfec014e5ddff8e6cea4df17e8ade67f4c7af9be7104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8s86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sbzfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:36Z is after 2025-08-24T17:21:41Z" Nov 25 
10:32:37 crc kubenswrapper[4813]: I1125 10:32:37.017650 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7391b3f2-dce9-4286-b622-7e7202a042c0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b823e81d1130cdb4373ba0b3d00a5f2d0717e34dcf36d2172550263b44e953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fa62598abd071ec69894326a022e35c2b383a5d5a1b893b0ecc1e30b8b775ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21dd198f1963287a0866dc0aa9d9854472f833cac0d0146a142a370e236b09f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9ab19e784bbd45e4f4c23288211674ac0d0affbe2736d338967e9237d672760\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9ab19e784bbd45e4f4c23288211674ac0d0affbe2736d338967e9237d672760\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:31:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:37Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:37 crc kubenswrapper[4813]: I1125 10:32:37.030010 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:37Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:37 crc kubenswrapper[4813]: I1125 10:32:37.032635 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:37 crc kubenswrapper[4813]: I1125 10:32:37.032657 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:37 crc kubenswrapper[4813]: I1125 10:32:37.032667 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:37 crc kubenswrapper[4813]: I1125 10:32:37.032693 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:37 crc kubenswrapper[4813]: I1125 10:32:37.032704 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:37Z","lastTransitionTime":"2025-11-25T10:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:37 crc kubenswrapper[4813]: I1125 10:32:37.043313 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:37Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:37 crc kubenswrapper[4813]: I1125 10:32:37.054359 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:37Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:37 crc kubenswrapper[4813]: I1125 10:32:37.070319 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4s9w7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2ac9045-f02f-4149-afa5-61da1452d547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbdce0d7869276078c48cf3c335c37ec3c8f324e76db30e312485508977ed8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://792d5ec80cac3667bf3ad534b473ae86eca391f49782cfc0938d789eefd24a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://792d5ec80cac3667bf3ad534b473ae86eca391f49782cfc0938d789eefd24a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2afd11e5128cad91161f49b1e5d6ac378dbd319773996dbe702bf678a45a4a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2afd11e5128cad91161f49b1e5d6ac378dbd319773996dbe702bf678a45a4a91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00af788f1e52f5e8adb3f20e61f5fbcfd1090e97a1f24d4ebe926dad23155ae5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00af788f1e52f5e8adb3f20e61f5fbcfd1090e97a1f24d4ebe926dad23155ae5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://156bff53f3008351c3f76a0cc5e9c3eeb4f19a7201392d095bc62012791d9fa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://156bff53f3008351c3f76a0cc5e9c3eeb4f19a7201392d095bc62012791d9fa5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a98899b475454bf9249b6437439cb15a56278a71678cd2c7a430b4c14ef4022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a98899b475454bf9249b6437439cb15a56278a71678cd2c7a430b4c14ef4022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://345ac26e481961ce51e21644b04d31cd5a82c981e9a2355ddd863036cabb4a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://345ac26e481961ce51e21644b04d31cd5a82c981e9a2355ddd863036cabb4a4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4s9w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:37Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:37 crc kubenswrapper[4813]: I1125 10:32:37.089810 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8460ec76-ba89-4f8f-9055-d7274ab52d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0292e263e2315d5f0352fb15d9e84e89f103c0b8e3371db2a611b001c5a3fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ab3178c217051fe9026c77a963c194bed57ec0fb9521678f41c7c16235ca789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee35613ff013fdd9f9ba4aa81006a99cd328ab65010b9b337815829bfcc88937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1581fa41d3a426258f7c464d5e0f2ad431917ccec0616d26bb8b0affa320c90e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c4c4032f6080041e0b54686cb2c9981d2578e7a2bd02bcc1cf008c8fa3bfb6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7324d51c21107fadbd2f170e16f3cc20fc473ca9b7b1bbe0fc5e64378bd6ab7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f47ead7e465395c7960e5ab292e2f2869ed1630436f2739b1e0420f217a96cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e66dd83e85e97c04906e16d68e4fa2de6af1eeb8595d8fd6fd8beae180f2b8e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T10:32:20Z\\\",\\\"message\\\":\\\" default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:19Z is after 2025-08-24T17:21:41Z]\\\\nI1125 10:32:19.622644 6322 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/machine-api-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"58a148b3-0a7b-4412-b447-f87788c4883f\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f47ead7e465395c7960e5ab292e2f2869ed1630436f2739b1e0420f217a96cf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T10:32:36Z\\\",\\\"message\\\":\\\"[] Timeout:\\\\u003cnil\\\\u003e Where:[where column 
_uuid == {8efa4d1a-72f5-4dfa-9bc2-9d93ef11ecf2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1125 10:32:36.408644 6489 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:36Z is after 2025-08-24T17:21:41Z]\\\\nI1125 10:32:36.408623 6489 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver/check-endpoints]} name:Service_openshift-apiserver/check-endpoints_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32898e756d7697bcb5b6ae678
0b7b752be67b44b9ce8c2f2459477c7f0b0a28d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6554bcb1ce7e97de39f99556fc4e3db63a583ea45bd87706a3c7737a8bde4f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6554bcb1ce7e97de39f99556fc4e3db63a583ea45bd87706a3c7737a8bde4f5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8s5k7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:37Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:37 crc kubenswrapper[4813]: I1125 10:32:37.100809 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-w28xl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4dw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4dw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-w28xl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:37Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:37 crc kubenswrapper[4813]: I1125 10:32:37.116035 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"061a2a52-878f-4543-8408-3a7b838f8881\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761ff3f6b4afa8edd4892d9fe727e977fb9700a8c7ab1c149c12bfa6431951c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf09669b247e0daa0787d296aa833570e1a542082a7a698bb499dc34f16fa4be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e593ff2a6412d8dfd3cd96e456f4fe9e2f8b04302d5b9036b828a3cf480b573\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11e2aa9eaa941ade1982256194422becbe3f375508cd507f603a822b10e03134\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:37Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:37 crc kubenswrapper[4813]: I1125 10:32:37.129139 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03303956e8d88df49c9c142a7074fa39272a78ea67e868b302d3a663d7f7178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:37Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:37 crc kubenswrapper[4813]: I1125 10:32:37.135061 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:37 crc kubenswrapper[4813]: I1125 10:32:37.135105 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:37 crc kubenswrapper[4813]: I1125 10:32:37.135116 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:37 crc kubenswrapper[4813]: I1125 10:32:37.135132 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:37 crc kubenswrapper[4813]: I1125 10:32:37.135141 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:37Z","lastTransitionTime":"2025-11-25T10:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:37 crc kubenswrapper[4813]: I1125 10:32:37.140498 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mmh87" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bcb41f8-67f5-4a87-8b49-07da054e0c81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fbf69eb2f0afb160e40675e9a17e8a9798a3f02de6a2f3aae7a30ef989e5479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xtc7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mmh87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:37Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:37 crc kubenswrapper[4813]: I1125 10:32:37.152869 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ece7e9c-d49a-4348-98ec-bd6ab589f750\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b85e2f2d2a870b205f19402a20540fa67104d12d2fcd412ada24c78b0602f2ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j55j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c16599a2b18976267f55176085b4b11e3e253e308707081d06d28d64f4dbb627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j55j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-knhz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:37Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:37 crc kubenswrapper[4813]: I1125 10:32:37.165967 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86379c39-b839-4552-949c-35431188a3a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf4d6feac8fd516ce2d5e2ec13519c2bbd2d152cffe7c434fe2c4b478e8c9a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80f2017cddd8c12997b1818074df5aa37a902dca43c4b60dda58080e1887f8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225dc69c294a0063eda858d71902e848fb59d4595c25bfeecdf8dfb60fdcd6f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"q
uay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cbb3888ff07d07784e188a0b7b49e0f5b421cfaeb61924a0a46094fb3795b32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e393f04b541e0fc8c686b42396605529aa65fdaaf6602dd7c64a322a5071d643\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T10:31:57Z\\\",\\\"message\\\":\\\"W1125 10:31:46.900040 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1125 10:31:46.900557 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764066706 cert, and key in /tmp/serving-cert-1749499007/serving-signer.crt, /tmp/serving-cert-1749499007/serving-signer.key\\\\nI1125 10:31:47.317086 1 observer_polling.go:159] Starting file observer\\\\nW1125 10:31:47.321027 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 10:31:47.321219 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 10:31:47.325062 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1749499007/tls.crt::/tmp/serving-cert-1749499007/tls.key\\\\\\\"\\\\nF1125 10:31:57.761534 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46e1b456988c700012c86fac792b65d2e7c9a049057d5a17efbf600418191910\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3af1cb4a9a556e116874a9b7f7cb75a99db83b991b65280b6b2a96233ca0a85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3af1cb4a9a556e116874a9b7f7cb75a99db83b991b65280b6b2a96233ca0a85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:31:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:37Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:37 crc kubenswrapper[4813]: I1125 10:32:37.237823 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:37 crc kubenswrapper[4813]: I1125 10:32:37.237872 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:37 crc kubenswrapper[4813]: I1125 10:32:37.237883 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:37 crc kubenswrapper[4813]: I1125 10:32:37.237898 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:37 crc kubenswrapper[4813]: I1125 10:32:37.237909 4813 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:37Z","lastTransitionTime":"2025-11-25T10:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:37 crc kubenswrapper[4813]: I1125 10:32:37.341035 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:37 crc kubenswrapper[4813]: I1125 10:32:37.341103 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:37 crc kubenswrapper[4813]: I1125 10:32:37.341114 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:37 crc kubenswrapper[4813]: I1125 10:32:37.341134 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:37 crc kubenswrapper[4813]: I1125 10:32:37.341147 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:37Z","lastTransitionTime":"2025-11-25T10:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:37 crc kubenswrapper[4813]: I1125 10:32:37.443936 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:37 crc kubenswrapper[4813]: I1125 10:32:37.443973 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:37 crc kubenswrapper[4813]: I1125 10:32:37.443981 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:37 crc kubenswrapper[4813]: I1125 10:32:37.443994 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:37 crc kubenswrapper[4813]: I1125 10:32:37.444004 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:37Z","lastTransitionTime":"2025-11-25T10:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:37 crc kubenswrapper[4813]: I1125 10:32:37.548906 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:37 crc kubenswrapper[4813]: I1125 10:32:37.549035 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:37 crc kubenswrapper[4813]: I1125 10:32:37.549060 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:37 crc kubenswrapper[4813]: I1125 10:32:37.549098 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:37 crc kubenswrapper[4813]: I1125 10:32:37.549131 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:37Z","lastTransitionTime":"2025-11-25T10:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:37 crc kubenswrapper[4813]: I1125 10:32:37.621159 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 10:32:37 crc kubenswrapper[4813]: I1125 10:32:37.621299 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w28xl" Nov 25 10:32:37 crc kubenswrapper[4813]: E1125 10:32:37.621326 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 10:32:37 crc kubenswrapper[4813]: E1125 10:32:37.621516 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-w28xl" podUID="74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2" Nov 25 10:32:37 crc kubenswrapper[4813]: I1125 10:32:37.652090 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:37 crc kubenswrapper[4813]: I1125 10:32:37.652157 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:37 crc kubenswrapper[4813]: I1125 10:32:37.652172 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:37 crc kubenswrapper[4813]: I1125 10:32:37.652191 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:37 crc kubenswrapper[4813]: I1125 10:32:37.652203 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:37Z","lastTransitionTime":"2025-11-25T10:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:37 crc kubenswrapper[4813]: I1125 10:32:37.754436 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:37 crc kubenswrapper[4813]: I1125 10:32:37.754502 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:37 crc kubenswrapper[4813]: I1125 10:32:37.754512 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:37 crc kubenswrapper[4813]: I1125 10:32:37.754527 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:37 crc kubenswrapper[4813]: I1125 10:32:37.754536 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:37Z","lastTransitionTime":"2025-11-25T10:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:37 crc kubenswrapper[4813]: I1125 10:32:37.856550 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:37 crc kubenswrapper[4813]: I1125 10:32:37.856594 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:37 crc kubenswrapper[4813]: I1125 10:32:37.856618 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:37 crc kubenswrapper[4813]: I1125 10:32:37.856645 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:37 crc kubenswrapper[4813]: I1125 10:32:37.856661 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:37Z","lastTransitionTime":"2025-11-25T10:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:37 crc kubenswrapper[4813]: I1125 10:32:37.921494 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8s5k7_8460ec76-ba89-4f8f-9055-d7274ab52d11/ovnkube-controller/2.log" Nov 25 10:32:37 crc kubenswrapper[4813]: I1125 10:32:37.925272 4813 scope.go:117] "RemoveContainer" containerID="0f47ead7e465395c7960e5ab292e2f2869ed1630436f2739b1e0420f217a96cf" Nov 25 10:32:37 crc kubenswrapper[4813]: E1125 10:32:37.925455 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-8s5k7_openshift-ovn-kubernetes(8460ec76-ba89-4f8f-9055-d7274ab52d11)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" podUID="8460ec76-ba89-4f8f-9055-d7274ab52d11" Nov 25 10:32:37 crc kubenswrapper[4813]: I1125 10:32:37.937621 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:37Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:37 crc kubenswrapper[4813]: I1125 10:32:37.950219 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4s9w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2ac9045-f02f-4149-afa5-61da1452d547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbdce0d7869276078c48cf3c335c37ec3c8f324e76db30e312485508977ed8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://792d5ec80cac3667bf3ad534b473ae86eca391f49782cfc0938d789eefd24a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://792d5ec80cac3667bf3ad534b473ae86eca391f49782cfc0938d789eefd24a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2afd11e5128cad91161f49b1e5d6ac378dbd319773996dbe702bf678a45a4a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2afd11e5128cad91161f49b1e5d6ac378dbd319773996dbe702bf678a45a4a91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00af788f1e52f5e8adb3f20e61f5fbcfd1090e97a1f24d4ebe926dad23155ae5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00af788f1e52f5e8adb3f20e61f5fbcfd1090e97a1f24d4ebe926dad23155ae5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://156bff53f3008351c3f76a0cc5e9c3eeb4f19a7201392d095bc62012791d9fa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://156bff53f3008351c3f76a0cc5e9c3eeb4f19a7201392d095bc62012791d9fa5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a98899b475454bf9249b6437439cb15a56278a71678cd2c7a430b4c14ef4022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a98899b475454bf9249b6437439cb15a56278a71678cd2c7a430b4c14ef4022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://345ac26e481961ce51e21644b04d31cd5a82c981e9a2355ddd863036cabb4a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://345ac26e481961ce51e21644b04d31cd5a82c981e9a2355ddd863036cabb4a4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4s9w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:37Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:37 crc kubenswrapper[4813]: I1125 10:32:37.958425 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:37 crc kubenswrapper[4813]: I1125 10:32:37.958460 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:37 crc kubenswrapper[4813]: I1125 10:32:37.958468 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:37 crc kubenswrapper[4813]: I1125 10:32:37.958480 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:37 crc kubenswrapper[4813]: I1125 10:32:37.958489 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:37Z","lastTransitionTime":"2025-11-25T10:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:37 crc kubenswrapper[4813]: I1125 10:32:37.968544 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8460ec76-ba89-4f8f-9055-d7274ab52d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0292e263e2315d5f0352fb15d9e84e89f103c0b8e3371db2a611b001c5a3fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ab3178c217051fe9026c77a963c194bed57ec0fb9521678f41c7c16235ca789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://ee35613ff013fdd9f9ba4aa81006a99cd328ab65010b9b337815829bfcc88937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1581fa41d3a426258f7c464d5e0f2ad431917ccec0616d26bb8b0affa320c90e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c4c4032f6080041e0b54686cb2c9981d2578e7a2bd02bcc1cf008c8fa3bfb6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7324d51c21107fadbd2f170e16f3cc20fc473ca9b7b1bbe0fc5e64378bd6ab7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f47ead7e465395c7960e5ab292e2f2869ed1630436f2739b1e0420f217a96cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f47ead7e465395c7960e5ab292e2f2869ed1630436f2739b1e0420f217a96cf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T10:32:36Z\\\",\\\"message\\\":\\\"[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {8efa4d1a-72f5-4dfa-9bc2-9d93ef11ecf2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1125 10:32:36.408644 6489 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:36Z is after 2025-08-24T17:21:41Z]\\\\nI1125 10:32:36.408623 6489 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver/check-endpoints]} name:Service_openshift-apiserver/check-endpoints_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed 
container=ovnkube-controller pod=ovnkube-node-8s5k7_openshift-ovn-kubernetes(8460ec76-ba89-4f8f-9055-d7274ab52d11)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32898e756d7697bcb5b6ae6780b7b752be67b44b9ce8c2f2459477c7f0b0a28d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6554bcb1ce7e97de39f99556fc4e3db63a583ea45bd87706a3c7737a8bde4f5b\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6554bcb1ce7e97de39f99556fc4e3db63a583ea45bd87706a3c7737a8bde4f5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8s5k7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:37Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:37 crc kubenswrapper[4813]: I1125 10:32:37.978553 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-w28xl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4dw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4dw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-w28xl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:37Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:37 crc kubenswrapper[4813]: I1125 10:32:37.988010 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"061a2a52-878f-4543-8408-3a7b838f8881\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761ff3f6b4afa8edd4892d9fe727e977fb9700a8c7ab1c149c12bfa6431951c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf09669b247e0daa0787d296aa833570e1a542082a7a698bb499dc34f16fa4be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e593ff2a6412d8dfd3cd96e456f4fe9e2f8b04302d5b9036b828a3cf480b573\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11e2aa9eaa941ade1982256194422becbe3f375508cd507f603a822b10e03134\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:37Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:37 crc kubenswrapper[4813]: I1125 10:32:37.999343 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:37Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:38 crc kubenswrapper[4813]: I1125 10:32:38.010924 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:38Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:38 crc kubenswrapper[4813]: I1125 10:32:38.025232 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03303956e8d88df49c9c142a7074fa39272a78ea67e868b302d3a663d7f7178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:38Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:38 crc kubenswrapper[4813]: I1125 10:32:38.039557 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86379c39-b839-4552-949c-35431188a3a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf4d6feac8fd516ce2d5e2ec13519c2bbd2d152cffe7c434fe2c4b478e8c9a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80f2017cddd8c12997b1818074df5aa37a902dca43c4b60dda58080e1887f8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225dc69c294a0063eda858d71902e848fb59d4595c25bfeecdf8dfb60fdcd6f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cbb3888ff07d07784e188a0b7b49e0f5b421cfaeb61924a0a46094fb3795b32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e393f04b541e0fc8c686b42396605529aa65fdaaf6602dd7c64a322a5071d643\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T10:31:57Z\\\",\\\"message\\\":\\\"W1125 10:31:46.900040 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1125 10:31:46.900557 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764066706 cert, and key in /tmp/serving-cert-1749499007/serving-signer.crt, /tmp/serving-cert-1749499007/serving-signer.key\\\\nI1125 10:31:47.317086 1 observer_polling.go:159] Starting file observer\\\\nW1125 10:31:47.321027 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 10:31:47.321219 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 10:31:47.325062 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1749499007/tls.crt::/tmp/serving-cert-1749499007/tls.key\\\\\\\"\\\\nF1125 10:31:57.761534 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46e1b456988c700012c86fac792b65d2e7c9a049057d5a17efbf600418191910\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3af1cb4a9a556e116874a9b7f7cb75a99db83b991b65280b6b2a96233ca0a85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://f3af1cb4a9a556e116874a9b7f7cb75a99db83b991b65280b6b2a96233ca0a85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:31:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:38Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:38 crc kubenswrapper[4813]: I1125 10:32:38.049482 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mmh87" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bcb41f8-67f5-4a87-8b49-07da054e0c81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fbf69eb2f0afb160e40675e9a17e8a9798a3f02de6a2f3aae7a30ef989e5479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xtc7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mmh87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:38Z is after 
2025-08-24T17:21:41Z" Nov 25 10:32:38 crc kubenswrapper[4813]: I1125 10:32:38.060250 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ece7e9c-d49a-4348-98ec-bd6ab589f750\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b85e2f2d2a870b205f19402a20540fa67104d12d2fcd412ada24c78b0602f2ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j55j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c16599a2b18976267f55176085b4b11e3e253e308707081d06d28d64f4dbb627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j55j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-knhz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:38Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:38 crc kubenswrapper[4813]: I1125 10:32:38.061364 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:38 crc kubenswrapper[4813]: I1125 10:32:38.061424 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:38 crc kubenswrapper[4813]: I1125 10:32:38.061434 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:38 crc kubenswrapper[4813]: I1125 10:32:38.061451 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:38 crc kubenswrapper[4813]: I1125 10:32:38.061461 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:38Z","lastTransitionTime":"2025-11-25T10:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:38 crc kubenswrapper[4813]: I1125 10:32:38.070664 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qltmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7637b907-3ae7-4b15-a4b9-a0c2217384a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://713975d4e8de4e14484cbd711f5279ddce3acad00571bf052b0ed728bd1a0ccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qvsb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\
":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qltmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:38Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:38 crc kubenswrapper[4813]: I1125 10:32:38.081357 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sbzfj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eccc6bcf-65c9-4741-a1d7-e5545661d3d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf35ea2947d355207c657bf7ef54d855cead727db293543efaa653bb03718f6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8s86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75f58510a2e937f933fadfec014e5ddff8e6cea4df17e8ade67f4c7af9be7104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8s86\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sbzfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:38Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:38 crc kubenswrapper[4813]: I1125 10:32:38.092743 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7391b3f2-dce9-4286-b622-7e7202a042c0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b823e81d1130cdb4373ba0b3d00a5f2d0717e34dcf36d2172550263b44e953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fa62598abd071ec69894326a022e35c2b383a5d5a1b893b0ecc1e30b8b775ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21dd198f1963287a0866dc0aa9d9854472f833cac0d0146a142a370e236b09f4\\\",\\\"image\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9ab19e784bbd45e4f4c23288211674ac0d0affbe2736d338967e9237d672760\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9ab19e784bbd45e4f4c23288211674ac0d0affbe2736d338967e9237d672760\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:31:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:38Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:38 crc kubenswrapper[4813]: I1125 10:32:38.104743 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00ebb057ca6152197fa76fc78787533ab8ddaa1e1a096c624e3efc5fcf091332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616fae5157b8d51f903f870d19e7ed40447c3eb954b0e1bd0b3323c27deb59f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:38Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:38 crc kubenswrapper[4813]: I1125 10:32:38.115102 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adac7b8b6297f077adc2d0e402547d19845a4b66a1279e143ba89f014ccdbf15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:38Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:38 crc kubenswrapper[4813]: I1125 10:32:38.126968 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rlpbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98439068-3c89-4c1b-8bb8-8aa848ef0cd3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73be3b0cabd20c94bd5c69211038398effe8adbb93eda17dbb136f17fa5ba62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdxm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rlpbx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:38Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:38 crc kubenswrapper[4813]: I1125 10:32:38.164183 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:38 crc kubenswrapper[4813]: I1125 10:32:38.164233 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:38 crc kubenswrapper[4813]: I1125 10:32:38.164247 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:38 crc kubenswrapper[4813]: I1125 10:32:38.164264 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:38 crc kubenswrapper[4813]: I1125 10:32:38.164276 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:38Z","lastTransitionTime":"2025-11-25T10:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:38 crc kubenswrapper[4813]: I1125 10:32:38.266954 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:38 crc kubenswrapper[4813]: I1125 10:32:38.266986 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:38 crc kubenswrapper[4813]: I1125 10:32:38.266997 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:38 crc kubenswrapper[4813]: I1125 10:32:38.267012 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:38 crc kubenswrapper[4813]: I1125 10:32:38.267022 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:38Z","lastTransitionTime":"2025-11-25T10:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:38 crc kubenswrapper[4813]: I1125 10:32:38.369552 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:38 crc kubenswrapper[4813]: I1125 10:32:38.369621 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:38 crc kubenswrapper[4813]: I1125 10:32:38.369634 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:38 crc kubenswrapper[4813]: I1125 10:32:38.369648 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:38 crc kubenswrapper[4813]: I1125 10:32:38.369661 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:38Z","lastTransitionTime":"2025-11-25T10:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:38 crc kubenswrapper[4813]: I1125 10:32:38.472497 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:38 crc kubenswrapper[4813]: I1125 10:32:38.472545 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:38 crc kubenswrapper[4813]: I1125 10:32:38.472557 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:38 crc kubenswrapper[4813]: I1125 10:32:38.472576 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:38 crc kubenswrapper[4813]: I1125 10:32:38.472597 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:38Z","lastTransitionTime":"2025-11-25T10:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:38 crc kubenswrapper[4813]: I1125 10:32:38.574953 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:38 crc kubenswrapper[4813]: I1125 10:32:38.575028 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:38 crc kubenswrapper[4813]: I1125 10:32:38.575042 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:38 crc kubenswrapper[4813]: I1125 10:32:38.575059 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:38 crc kubenswrapper[4813]: I1125 10:32:38.575072 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:38Z","lastTransitionTime":"2025-11-25T10:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:38 crc kubenswrapper[4813]: I1125 10:32:38.620964 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 10:32:38 crc kubenswrapper[4813]: I1125 10:32:38.621018 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 10:32:38 crc kubenswrapper[4813]: E1125 10:32:38.621104 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 10:32:38 crc kubenswrapper[4813]: E1125 10:32:38.621184 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 10:32:38 crc kubenswrapper[4813]: I1125 10:32:38.680888 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:38 crc kubenswrapper[4813]: I1125 10:32:38.681149 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:38 crc kubenswrapper[4813]: I1125 10:32:38.681245 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:38 crc kubenswrapper[4813]: I1125 10:32:38.681339 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:38 crc kubenswrapper[4813]: I1125 10:32:38.681425 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:38Z","lastTransitionTime":"2025-11-25T10:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:38 crc kubenswrapper[4813]: I1125 10:32:38.785020 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:38 crc kubenswrapper[4813]: I1125 10:32:38.785305 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:38 crc kubenswrapper[4813]: I1125 10:32:38.785435 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:38 crc kubenswrapper[4813]: I1125 10:32:38.785526 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:38 crc kubenswrapper[4813]: I1125 10:32:38.785604 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:38Z","lastTransitionTime":"2025-11-25T10:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:38 crc kubenswrapper[4813]: I1125 10:32:38.888109 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:38 crc kubenswrapper[4813]: I1125 10:32:38.888150 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:38 crc kubenswrapper[4813]: I1125 10:32:38.888162 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:38 crc kubenswrapper[4813]: I1125 10:32:38.888178 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:38 crc kubenswrapper[4813]: I1125 10:32:38.888189 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:38Z","lastTransitionTime":"2025-11-25T10:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:38 crc kubenswrapper[4813]: I1125 10:32:38.990051 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:38 crc kubenswrapper[4813]: I1125 10:32:38.990289 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:38 crc kubenswrapper[4813]: I1125 10:32:38.990359 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:38 crc kubenswrapper[4813]: I1125 10:32:38.990466 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:38 crc kubenswrapper[4813]: I1125 10:32:38.990537 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:38Z","lastTransitionTime":"2025-11-25T10:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:39 crc kubenswrapper[4813]: I1125 10:32:39.094076 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:39 crc kubenswrapper[4813]: I1125 10:32:39.094142 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:39 crc kubenswrapper[4813]: I1125 10:32:39.094158 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:39 crc kubenswrapper[4813]: I1125 10:32:39.094185 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:39 crc kubenswrapper[4813]: I1125 10:32:39.094201 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:39Z","lastTransitionTime":"2025-11-25T10:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:39 crc kubenswrapper[4813]: I1125 10:32:39.196764 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:39 crc kubenswrapper[4813]: I1125 10:32:39.196822 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:39 crc kubenswrapper[4813]: I1125 10:32:39.196838 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:39 crc kubenswrapper[4813]: I1125 10:32:39.196861 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:39 crc kubenswrapper[4813]: I1125 10:32:39.196876 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:39Z","lastTransitionTime":"2025-11-25T10:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:39 crc kubenswrapper[4813]: I1125 10:32:39.298959 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:39 crc kubenswrapper[4813]: I1125 10:32:39.299001 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:39 crc kubenswrapper[4813]: I1125 10:32:39.299014 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:39 crc kubenswrapper[4813]: I1125 10:32:39.299056 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:39 crc kubenswrapper[4813]: I1125 10:32:39.299070 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:39Z","lastTransitionTime":"2025-11-25T10:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:39 crc kubenswrapper[4813]: I1125 10:32:39.402816 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:39 crc kubenswrapper[4813]: I1125 10:32:39.402854 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:39 crc kubenswrapper[4813]: I1125 10:32:39.402873 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:39 crc kubenswrapper[4813]: I1125 10:32:39.402890 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:39 crc kubenswrapper[4813]: I1125 10:32:39.402901 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:39Z","lastTransitionTime":"2025-11-25T10:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:39 crc kubenswrapper[4813]: I1125 10:32:39.505637 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:39 crc kubenswrapper[4813]: I1125 10:32:39.505991 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:39 crc kubenswrapper[4813]: I1125 10:32:39.506007 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:39 crc kubenswrapper[4813]: I1125 10:32:39.506027 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:39 crc kubenswrapper[4813]: I1125 10:32:39.506039 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:39Z","lastTransitionTime":"2025-11-25T10:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:39 crc kubenswrapper[4813]: I1125 10:32:39.609020 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:39 crc kubenswrapper[4813]: I1125 10:32:39.609070 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:39 crc kubenswrapper[4813]: I1125 10:32:39.609080 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:39 crc kubenswrapper[4813]: I1125 10:32:39.609093 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:39 crc kubenswrapper[4813]: I1125 10:32:39.609103 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:39Z","lastTransitionTime":"2025-11-25T10:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:39 crc kubenswrapper[4813]: I1125 10:32:39.621424 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 10:32:39 crc kubenswrapper[4813]: I1125 10:32:39.621467 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w28xl" Nov 25 10:32:39 crc kubenswrapper[4813]: E1125 10:32:39.621520 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 10:32:39 crc kubenswrapper[4813]: E1125 10:32:39.621584 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w28xl" podUID="74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2" Nov 25 10:32:39 crc kubenswrapper[4813]: I1125 10:32:39.711879 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:39 crc kubenswrapper[4813]: I1125 10:32:39.711927 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:39 crc kubenswrapper[4813]: I1125 10:32:39.711937 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:39 crc kubenswrapper[4813]: I1125 10:32:39.711952 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:39 crc kubenswrapper[4813]: I1125 10:32:39.711965 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:39Z","lastTransitionTime":"2025-11-25T10:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:39 crc kubenswrapper[4813]: I1125 10:32:39.814191 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:39 crc kubenswrapper[4813]: I1125 10:32:39.814449 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:39 crc kubenswrapper[4813]: I1125 10:32:39.814544 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:39 crc kubenswrapper[4813]: I1125 10:32:39.814625 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:39 crc kubenswrapper[4813]: I1125 10:32:39.814696 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:39Z","lastTransitionTime":"2025-11-25T10:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:39 crc kubenswrapper[4813]: I1125 10:32:39.916740 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:39 crc kubenswrapper[4813]: I1125 10:32:39.916797 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:39 crc kubenswrapper[4813]: I1125 10:32:39.916813 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:39 crc kubenswrapper[4813]: I1125 10:32:39.916831 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:39 crc kubenswrapper[4813]: I1125 10:32:39.916842 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:39Z","lastTransitionTime":"2025-11-25T10:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:40 crc kubenswrapper[4813]: I1125 10:32:40.020188 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:40 crc kubenswrapper[4813]: I1125 10:32:40.020276 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:40 crc kubenswrapper[4813]: I1125 10:32:40.020299 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:40 crc kubenswrapper[4813]: I1125 10:32:40.020330 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:40 crc kubenswrapper[4813]: I1125 10:32:40.020351 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:40Z","lastTransitionTime":"2025-11-25T10:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:40 crc kubenswrapper[4813]: I1125 10:32:40.122879 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:40 crc kubenswrapper[4813]: I1125 10:32:40.122913 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:40 crc kubenswrapper[4813]: I1125 10:32:40.122922 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:40 crc kubenswrapper[4813]: I1125 10:32:40.122936 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:40 crc kubenswrapper[4813]: I1125 10:32:40.122946 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:40Z","lastTransitionTime":"2025-11-25T10:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:40 crc kubenswrapper[4813]: I1125 10:32:40.225226 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:40 crc kubenswrapper[4813]: I1125 10:32:40.225287 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:40 crc kubenswrapper[4813]: I1125 10:32:40.225311 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:40 crc kubenswrapper[4813]: I1125 10:32:40.225341 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:40 crc kubenswrapper[4813]: I1125 10:32:40.225362 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:40Z","lastTransitionTime":"2025-11-25T10:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:40 crc kubenswrapper[4813]: I1125 10:32:40.327969 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:40 crc kubenswrapper[4813]: I1125 10:32:40.328277 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:40 crc kubenswrapper[4813]: I1125 10:32:40.328371 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:40 crc kubenswrapper[4813]: I1125 10:32:40.328492 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:40 crc kubenswrapper[4813]: I1125 10:32:40.328582 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:40Z","lastTransitionTime":"2025-11-25T10:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:40 crc kubenswrapper[4813]: I1125 10:32:40.431770 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:40 crc kubenswrapper[4813]: I1125 10:32:40.431820 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:40 crc kubenswrapper[4813]: I1125 10:32:40.431832 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:40 crc kubenswrapper[4813]: I1125 10:32:40.431851 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:40 crc kubenswrapper[4813]: I1125 10:32:40.431869 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:40Z","lastTransitionTime":"2025-11-25T10:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:40 crc kubenswrapper[4813]: I1125 10:32:40.534546 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:40 crc kubenswrapper[4813]: I1125 10:32:40.534606 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:40 crc kubenswrapper[4813]: I1125 10:32:40.534615 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:40 crc kubenswrapper[4813]: I1125 10:32:40.534630 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:40 crc kubenswrapper[4813]: I1125 10:32:40.534640 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:40Z","lastTransitionTime":"2025-11-25T10:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:40 crc kubenswrapper[4813]: I1125 10:32:40.621157 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 10:32:40 crc kubenswrapper[4813]: E1125 10:32:40.621291 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 10:32:40 crc kubenswrapper[4813]: I1125 10:32:40.621189 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 10:32:40 crc kubenswrapper[4813]: E1125 10:32:40.621572 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 10:32:40 crc kubenswrapper[4813]: I1125 10:32:40.637755 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:40 crc kubenswrapper[4813]: I1125 10:32:40.637803 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:40 crc kubenswrapper[4813]: I1125 10:32:40.637814 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:40 crc kubenswrapper[4813]: I1125 10:32:40.637830 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:40 crc kubenswrapper[4813]: I1125 10:32:40.637841 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:40Z","lastTransitionTime":"2025-11-25T10:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:40 crc kubenswrapper[4813]: I1125 10:32:40.741022 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:40 crc kubenswrapper[4813]: I1125 10:32:40.741095 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:40 crc kubenswrapper[4813]: I1125 10:32:40.741117 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:40 crc kubenswrapper[4813]: I1125 10:32:40.741142 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:40 crc kubenswrapper[4813]: I1125 10:32:40.741160 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:40Z","lastTransitionTime":"2025-11-25T10:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:40 crc kubenswrapper[4813]: I1125 10:32:40.746826 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:40 crc kubenswrapper[4813]: I1125 10:32:40.746892 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:40 crc kubenswrapper[4813]: I1125 10:32:40.746911 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:40 crc kubenswrapper[4813]: I1125 10:32:40.746941 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:40 crc kubenswrapper[4813]: I1125 10:32:40.746959 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:40Z","lastTransitionTime":"2025-11-25T10:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:40 crc kubenswrapper[4813]: E1125 10:32:40.767518 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b8f6803-8c92-44d2-bc35-374b0f00608e\\\",\\\"systemUUID\\\":\\\"85f815b0-dc24-49ca-a7fb-6bc8e198cbb1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:40Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:40 crc kubenswrapper[4813]: I1125 10:32:40.772501 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:40 crc kubenswrapper[4813]: I1125 10:32:40.772542 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 10:32:40 crc kubenswrapper[4813]: I1125 10:32:40.772556 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:40 crc kubenswrapper[4813]: I1125 10:32:40.772577 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:40 crc kubenswrapper[4813]: I1125 10:32:40.772591 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:40Z","lastTransitionTime":"2025-11-25T10:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:40 crc kubenswrapper[4813]: E1125 10:32:40.786903 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b8f6803-8c92-44d2-bc35-374b0f00608e\\\",\\\"systemUUID\\\":\\\"85f815b0-dc24-49ca-a7fb-6bc8e198cbb1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:40Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:40 crc kubenswrapper[4813]: I1125 10:32:40.792725 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:40 crc kubenswrapper[4813]: I1125 10:32:40.793003 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 10:32:40 crc kubenswrapper[4813]: I1125 10:32:40.793114 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:40 crc kubenswrapper[4813]: I1125 10:32:40.793194 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:40 crc kubenswrapper[4813]: I1125 10:32:40.793275 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:40Z","lastTransitionTime":"2025-11-25T10:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:40 crc kubenswrapper[4813]: E1125 10:32:40.806008 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b8f6803-8c92-44d2-bc35-374b0f00608e\\\",\\\"systemUUID\\\":\\\"85f815b0-dc24-49ca-a7fb-6bc8e198cbb1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:40Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:40 crc kubenswrapper[4813]: I1125 10:32:40.810313 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:40 crc kubenswrapper[4813]: I1125 10:32:40.810372 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 10:32:40 crc kubenswrapper[4813]: I1125 10:32:40.810393 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:40 crc kubenswrapper[4813]: I1125 10:32:40.810417 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:40 crc kubenswrapper[4813]: I1125 10:32:40.810434 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:40Z","lastTransitionTime":"2025-11-25T10:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:40 crc kubenswrapper[4813]: E1125 10:32:40.824606 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b8f6803-8c92-44d2-bc35-374b0f00608e\\\",\\\"systemUUID\\\":\\\"85f815b0-dc24-49ca-a7fb-6bc8e198cbb1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:40Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:40 crc kubenswrapper[4813]: I1125 10:32:40.829075 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:40 crc kubenswrapper[4813]: I1125 10:32:40.829129 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 10:32:40 crc kubenswrapper[4813]: I1125 10:32:40.829141 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:40 crc kubenswrapper[4813]: I1125 10:32:40.829159 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:40 crc kubenswrapper[4813]: I1125 10:32:40.829174 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:40Z","lastTransitionTime":"2025-11-25T10:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:40 crc kubenswrapper[4813]: E1125 10:32:40.840475 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b8f6803-8c92-44d2-bc35-374b0f00608e\\\",\\\"systemUUID\\\":\\\"85f815b0-dc24-49ca-a7fb-6bc8e198cbb1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:40Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:40 crc kubenswrapper[4813]: E1125 10:32:40.840619 4813 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 10:32:40 crc kubenswrapper[4813]: I1125 10:32:40.844118 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 25 10:32:40 crc kubenswrapper[4813]: I1125 10:32:40.844160 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:40 crc kubenswrapper[4813]: I1125 10:32:40.844170 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:40 crc kubenswrapper[4813]: I1125 10:32:40.844185 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:40 crc kubenswrapper[4813]: I1125 10:32:40.844200 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:40Z","lastTransitionTime":"2025-11-25T10:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:40 crc kubenswrapper[4813]: I1125 10:32:40.947224 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:40 crc kubenswrapper[4813]: I1125 10:32:40.947708 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:40 crc kubenswrapper[4813]: I1125 10:32:40.947835 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:40 crc kubenswrapper[4813]: I1125 10:32:40.947959 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:40 crc kubenswrapper[4813]: I1125 10:32:40.948039 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:40Z","lastTransitionTime":"2025-11-25T10:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:41 crc kubenswrapper[4813]: I1125 10:32:41.050746 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:41 crc kubenswrapper[4813]: I1125 10:32:41.050788 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:41 crc kubenswrapper[4813]: I1125 10:32:41.050797 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:41 crc kubenswrapper[4813]: I1125 10:32:41.050811 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:41 crc kubenswrapper[4813]: I1125 10:32:41.050820 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:41Z","lastTransitionTime":"2025-11-25T10:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:41 crc kubenswrapper[4813]: I1125 10:32:41.153619 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:41 crc kubenswrapper[4813]: I1125 10:32:41.153661 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:41 crc kubenswrapper[4813]: I1125 10:32:41.153673 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:41 crc kubenswrapper[4813]: I1125 10:32:41.153718 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:41 crc kubenswrapper[4813]: I1125 10:32:41.153735 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:41Z","lastTransitionTime":"2025-11-25T10:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:41 crc kubenswrapper[4813]: I1125 10:32:41.256944 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:41 crc kubenswrapper[4813]: I1125 10:32:41.257025 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:41 crc kubenswrapper[4813]: I1125 10:32:41.257048 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:41 crc kubenswrapper[4813]: I1125 10:32:41.257076 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:41 crc kubenswrapper[4813]: I1125 10:32:41.257109 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:41Z","lastTransitionTime":"2025-11-25T10:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:41 crc kubenswrapper[4813]: I1125 10:32:41.359359 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:41 crc kubenswrapper[4813]: I1125 10:32:41.359404 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:41 crc kubenswrapper[4813]: I1125 10:32:41.359412 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:41 crc kubenswrapper[4813]: I1125 10:32:41.359426 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:41 crc kubenswrapper[4813]: I1125 10:32:41.359455 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:41Z","lastTransitionTime":"2025-11-25T10:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:41 crc kubenswrapper[4813]: I1125 10:32:41.461868 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:41 crc kubenswrapper[4813]: I1125 10:32:41.461936 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:41 crc kubenswrapper[4813]: I1125 10:32:41.461959 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:41 crc kubenswrapper[4813]: I1125 10:32:41.461980 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:41 crc kubenswrapper[4813]: I1125 10:32:41.461996 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:41Z","lastTransitionTime":"2025-11-25T10:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:41 crc kubenswrapper[4813]: I1125 10:32:41.564165 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:41 crc kubenswrapper[4813]: I1125 10:32:41.564281 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:41 crc kubenswrapper[4813]: I1125 10:32:41.564298 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:41 crc kubenswrapper[4813]: I1125 10:32:41.564323 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:41 crc kubenswrapper[4813]: I1125 10:32:41.564339 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:41Z","lastTransitionTime":"2025-11-25T10:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:41 crc kubenswrapper[4813]: I1125 10:32:41.621061 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 10:32:41 crc kubenswrapper[4813]: I1125 10:32:41.621108 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w28xl" Nov 25 10:32:41 crc kubenswrapper[4813]: E1125 10:32:41.621261 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 10:32:41 crc kubenswrapper[4813]: E1125 10:32:41.621328 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w28xl" podUID="74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2" Nov 25 10:32:41 crc kubenswrapper[4813]: I1125 10:32:41.667455 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:41 crc kubenswrapper[4813]: I1125 10:32:41.667497 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:41 crc kubenswrapper[4813]: I1125 10:32:41.667507 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:41 crc kubenswrapper[4813]: I1125 10:32:41.667522 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:41 crc kubenswrapper[4813]: I1125 10:32:41.667553 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:41Z","lastTransitionTime":"2025-11-25T10:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:41 crc kubenswrapper[4813]: I1125 10:32:41.770445 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:41 crc kubenswrapper[4813]: I1125 10:32:41.770496 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:41 crc kubenswrapper[4813]: I1125 10:32:41.770508 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:41 crc kubenswrapper[4813]: I1125 10:32:41.770521 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:41 crc kubenswrapper[4813]: I1125 10:32:41.770533 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:41Z","lastTransitionTime":"2025-11-25T10:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:41 crc kubenswrapper[4813]: I1125 10:32:41.872976 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:41 crc kubenswrapper[4813]: I1125 10:32:41.873033 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:41 crc kubenswrapper[4813]: I1125 10:32:41.873045 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:41 crc kubenswrapper[4813]: I1125 10:32:41.873063 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:41 crc kubenswrapper[4813]: I1125 10:32:41.873074 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:41Z","lastTransitionTime":"2025-11-25T10:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:41 crc kubenswrapper[4813]: I1125 10:32:41.975670 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:41 crc kubenswrapper[4813]: I1125 10:32:41.975744 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:41 crc kubenswrapper[4813]: I1125 10:32:41.975759 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:41 crc kubenswrapper[4813]: I1125 10:32:41.975778 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:41 crc kubenswrapper[4813]: I1125 10:32:41.975791 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:41Z","lastTransitionTime":"2025-11-25T10:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:42 crc kubenswrapper[4813]: I1125 10:32:42.078410 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:42 crc kubenswrapper[4813]: I1125 10:32:42.079245 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:42 crc kubenswrapper[4813]: I1125 10:32:42.079318 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:42 crc kubenswrapper[4813]: I1125 10:32:42.079348 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:42 crc kubenswrapper[4813]: I1125 10:32:42.079375 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:42Z","lastTransitionTime":"2025-11-25T10:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:42 crc kubenswrapper[4813]: I1125 10:32:42.182298 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:42 crc kubenswrapper[4813]: I1125 10:32:42.182370 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:42 crc kubenswrapper[4813]: I1125 10:32:42.182383 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:42 crc kubenswrapper[4813]: I1125 10:32:42.182407 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:42 crc kubenswrapper[4813]: I1125 10:32:42.182425 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:42Z","lastTransitionTime":"2025-11-25T10:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:42 crc kubenswrapper[4813]: I1125 10:32:42.285247 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:42 crc kubenswrapper[4813]: I1125 10:32:42.285311 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:42 crc kubenswrapper[4813]: I1125 10:32:42.285323 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:42 crc kubenswrapper[4813]: I1125 10:32:42.285343 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:42 crc kubenswrapper[4813]: I1125 10:32:42.285357 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:42Z","lastTransitionTime":"2025-11-25T10:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:42 crc kubenswrapper[4813]: I1125 10:32:42.388610 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:42 crc kubenswrapper[4813]: I1125 10:32:42.388697 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:42 crc kubenswrapper[4813]: I1125 10:32:42.388715 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:42 crc kubenswrapper[4813]: I1125 10:32:42.388737 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:42 crc kubenswrapper[4813]: I1125 10:32:42.388756 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:42Z","lastTransitionTime":"2025-11-25T10:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:42 crc kubenswrapper[4813]: I1125 10:32:42.491533 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:42 crc kubenswrapper[4813]: I1125 10:32:42.491594 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:42 crc kubenswrapper[4813]: I1125 10:32:42.491611 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:42 crc kubenswrapper[4813]: I1125 10:32:42.491635 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:42 crc kubenswrapper[4813]: I1125 10:32:42.491652 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:42Z","lastTransitionTime":"2025-11-25T10:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:42 crc kubenswrapper[4813]: I1125 10:32:42.594318 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:42 crc kubenswrapper[4813]: I1125 10:32:42.594359 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:42 crc kubenswrapper[4813]: I1125 10:32:42.594368 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:42 crc kubenswrapper[4813]: I1125 10:32:42.594380 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:42 crc kubenswrapper[4813]: I1125 10:32:42.594389 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:42Z","lastTransitionTime":"2025-11-25T10:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:42 crc kubenswrapper[4813]: I1125 10:32:42.621272 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 10:32:42 crc kubenswrapper[4813]: I1125 10:32:42.621400 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 10:32:42 crc kubenswrapper[4813]: E1125 10:32:42.621458 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 10:32:42 crc kubenswrapper[4813]: E1125 10:32:42.621586 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 10:32:42 crc kubenswrapper[4813]: I1125 10:32:42.696465 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:42 crc kubenswrapper[4813]: I1125 10:32:42.696517 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:42 crc kubenswrapper[4813]: I1125 10:32:42.696529 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:42 crc kubenswrapper[4813]: I1125 10:32:42.696545 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:42 crc kubenswrapper[4813]: I1125 10:32:42.696556 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:42Z","lastTransitionTime":"2025-11-25T10:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:42 crc kubenswrapper[4813]: I1125 10:32:42.800070 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:42 crc kubenswrapper[4813]: I1125 10:32:42.800126 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:42 crc kubenswrapper[4813]: I1125 10:32:42.800137 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:42 crc kubenswrapper[4813]: I1125 10:32:42.800160 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:42 crc kubenswrapper[4813]: I1125 10:32:42.800174 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:42Z","lastTransitionTime":"2025-11-25T10:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:42 crc kubenswrapper[4813]: I1125 10:32:42.902776 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:42 crc kubenswrapper[4813]: I1125 10:32:42.903032 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:42 crc kubenswrapper[4813]: I1125 10:32:42.903046 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:42 crc kubenswrapper[4813]: I1125 10:32:42.903073 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:42 crc kubenswrapper[4813]: I1125 10:32:42.903090 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:42Z","lastTransitionTime":"2025-11-25T10:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:43 crc kubenswrapper[4813]: I1125 10:32:43.006223 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:43 crc kubenswrapper[4813]: I1125 10:32:43.006288 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:43 crc kubenswrapper[4813]: I1125 10:32:43.006298 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:43 crc kubenswrapper[4813]: I1125 10:32:43.006313 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:43 crc kubenswrapper[4813]: I1125 10:32:43.006323 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:43Z","lastTransitionTime":"2025-11-25T10:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:43 crc kubenswrapper[4813]: I1125 10:32:43.108807 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:43 crc kubenswrapper[4813]: I1125 10:32:43.108885 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:43 crc kubenswrapper[4813]: I1125 10:32:43.108893 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:43 crc kubenswrapper[4813]: I1125 10:32:43.108909 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:43 crc kubenswrapper[4813]: I1125 10:32:43.108919 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:43Z","lastTransitionTime":"2025-11-25T10:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:43 crc kubenswrapper[4813]: I1125 10:32:43.211149 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:43 crc kubenswrapper[4813]: I1125 10:32:43.211199 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:43 crc kubenswrapper[4813]: I1125 10:32:43.211215 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:43 crc kubenswrapper[4813]: I1125 10:32:43.211238 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:43 crc kubenswrapper[4813]: I1125 10:32:43.211251 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:43Z","lastTransitionTime":"2025-11-25T10:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:43 crc kubenswrapper[4813]: I1125 10:32:43.313895 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:43 crc kubenswrapper[4813]: I1125 10:32:43.313971 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:43 crc kubenswrapper[4813]: I1125 10:32:43.313995 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:43 crc kubenswrapper[4813]: I1125 10:32:43.314024 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:43 crc kubenswrapper[4813]: I1125 10:32:43.314044 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:43Z","lastTransitionTime":"2025-11-25T10:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:43 crc kubenswrapper[4813]: I1125 10:32:43.417096 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:43 crc kubenswrapper[4813]: I1125 10:32:43.417157 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:43 crc kubenswrapper[4813]: I1125 10:32:43.417168 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:43 crc kubenswrapper[4813]: I1125 10:32:43.417182 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:43 crc kubenswrapper[4813]: I1125 10:32:43.417193 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:43Z","lastTransitionTime":"2025-11-25T10:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:43 crc kubenswrapper[4813]: I1125 10:32:43.519554 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:43 crc kubenswrapper[4813]: I1125 10:32:43.519608 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:43 crc kubenswrapper[4813]: I1125 10:32:43.519617 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:43 crc kubenswrapper[4813]: I1125 10:32:43.519631 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:43 crc kubenswrapper[4813]: I1125 10:32:43.519639 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:43Z","lastTransitionTime":"2025-11-25T10:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:43 crc kubenswrapper[4813]: I1125 10:32:43.620508 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 10:32:43 crc kubenswrapper[4813]: I1125 10:32:43.620563 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w28xl" Nov 25 10:32:43 crc kubenswrapper[4813]: E1125 10:32:43.620665 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 10:32:43 crc kubenswrapper[4813]: E1125 10:32:43.621035 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-w28xl" podUID="74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2" Nov 25 10:32:43 crc kubenswrapper[4813]: I1125 10:32:43.621289 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:43 crc kubenswrapper[4813]: I1125 10:32:43.621464 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:43 crc kubenswrapper[4813]: I1125 10:32:43.621478 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:43 crc kubenswrapper[4813]: I1125 10:32:43.621496 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:43 crc kubenswrapper[4813]: I1125 10:32:43.621507 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:43Z","lastTransitionTime":"2025-11-25T10:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:43 crc kubenswrapper[4813]: I1125 10:32:43.637096 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4s9w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2ac9045-f02f-4149-afa5-61da1452d547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbdce0d7869276078c48cf3c335c37ec3c8f324e76db30e312485508977ed8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://792d5ec80cac3667bf3ad534b473ae86eca391f49782cfc0938d789eefd24a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://792d5ec80cac3667bf3ad534b473ae86eca391f49782cfc0938d789eefd24a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2afd11e5128cad91161f49b1e5d6ac378dbd319773996dbe702bf678a45a4a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2afd11e5128cad91161f49b1e5d6ac378dbd319773996dbe702bf678a45a4a91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00af788f1e52f5e8adb3f20e61f5fbcfd1090e97a1f24d4ebe926dad23155ae5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00af788f1e52f5e8adb3f20e61f5fbcfd1090e97a1f24d4ebe926dad23155ae5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":
\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://156bff53f3008351c3f76a0cc5e9c3eeb4f19a7201392d095bc62012791d9fa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://156bff53f3008351c3f76a0cc5e9c3eeb4f19a7201392d095bc62012791d9fa5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a98899b475454bf9249b6437439cb15a56278a71678cd2c7a430b4c14ef4022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a98899b475454bf9249b6437439cb15a56278a71678cd2c7a430b4c14ef4022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://345ac26e481961ce51e21644b04d31cd5a82c981e9a2355ddd863036cabb4a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://345ac26e481961ce51e21644b04d31cd
5a82c981e9a2355ddd863036cabb4a4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4s9w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:43Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:43 crc kubenswrapper[4813]: I1125 10:32:43.664303 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8460ec76-ba89-4f8f-9055-d7274ab52d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0292e263e2315d5f0352fb15d9e84e89f103c0b8e3371db2a611b001c5a3fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ab3178c217051fe9026c77a963c194bed57ec0fb9521678f41c7c16235ca789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee35613ff013fdd9f9ba4aa81006a99cd328ab65010b9b337815829bfcc88937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1581fa41d3a426258f7c464d5e0f2ad431917ccec0616d26bb8b0affa320c90e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c4c4032f6080041e0b54686cb2c9981d2578e7a2bd02bcc1cf008c8fa3bfb6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7324d51c21107fadbd2f170e16f3cc20fc473ca9b7b1bbe0fc5e64378bd6ab7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f47ead7e465395c7960e5ab292e2f2869ed1630
436f2739b1e0420f217a96cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f47ead7e465395c7960e5ab292e2f2869ed1630436f2739b1e0420f217a96cf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T10:32:36Z\\\",\\\"message\\\":\\\"[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {8efa4d1a-72f5-4dfa-9bc2-9d93ef11ecf2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1125 10:32:36.408644 6489 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:36Z is after 2025-08-24T17:21:41Z]\\\\nI1125 10:32:36.408623 6489 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver/check-endpoints]} name:Service_openshift-apiserver/check-endpoints_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8s5k7_openshift-ovn-kubernetes(8460ec76-ba89-4f8f-9055-d7274ab52d11)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32898e756d7697bcb5b6ae6780b7b752be67b44b9ce8c2f2459477c7f0b0a28d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6554bcb1ce7e97de39f99556fc4e3db63a583ea45bd87706a3c7737a8bde4f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6554bcb1ce7e97de39f99556fc4e3db63a583ea45bd87706a3c7737a8bde4f5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8s5k7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:43Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:43 crc kubenswrapper[4813]: I1125 10:32:43.674241 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-w28xl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4dw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4dw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-w28xl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:43Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:43 crc kubenswrapper[4813]: I1125 10:32:43.685651 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"061a2a52-878f-4543-8408-3a7b838f8881\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761ff3f6b4afa8edd4892d9fe727e977fb9700a8c7ab1c149c12bfa6431951c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf09669b247e0daa0787d296aa833570e1a542082a7a698bb499dc34f16fa4be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e593ff2a6412d8dfd3cd96e456f4fe9e2f8b04302d5b9036b828a3cf480b573\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11e2aa9eaa941ade1982256194422becbe3f375508cd507f603a822b10e03134\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:43Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:43 crc kubenswrapper[4813]: I1125 10:32:43.698231 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:43Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:43 crc kubenswrapper[4813]: I1125 10:32:43.711386 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:43Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:43 crc kubenswrapper[4813]: I1125 10:32:43.723769 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:43 crc kubenswrapper[4813]: I1125 10:32:43.723830 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:43 crc kubenswrapper[4813]: I1125 10:32:43.723842 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:43 crc kubenswrapper[4813]: I1125 10:32:43.723871 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:43 crc kubenswrapper[4813]: I1125 10:32:43.723884 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:43Z","lastTransitionTime":"2025-11-25T10:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:43 crc kubenswrapper[4813]: I1125 10:32:43.728261 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:43Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:43 crc kubenswrapper[4813]: I1125 10:32:43.740040 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03303956e8d88df49c9c142a7074fa39272a78ea67e868b302d3a663d7f7178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:43Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:43 crc kubenswrapper[4813]: I1125 10:32:43.751360 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86379c39-b839-4552-949c-35431188a3a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf4d6feac8fd516ce2d5e2ec13519c2bbd2d152cffe7c434fe2c4b478e8c9a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80f2017cddd8c12997b1818074df5aa37a902dca43c4b60dda58080e1887f8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225dc69c294a0063eda858d71902e848fb59d4595c25bfeecdf8dfb60fdcd6f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cbb3888ff07d07784e188a0b7b49e0f5b421cfaeb61924a0a46094fb3795b32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e393f04b541e0fc8c686b42396605529aa65fdaaf6602dd7c64a322a5071d643\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T10:31:57Z\\\",\\\"message\\\":\\\"W1125 10:31:46.900040 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1125 10:31:46.900557 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764066706 cert, and key in /tmp/serving-cert-1749499007/serving-signer.crt, /tmp/serving-cert-1749499007/serving-signer.key\\\\nI1125 10:31:47.317086 1 observer_polling.go:159] Starting file observer\\\\nW1125 10:31:47.321027 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 10:31:47.321219 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 10:31:47.325062 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1749499007/tls.crt::/tmp/serving-cert-1749499007/tls.key\\\\\\\"\\\\nF1125 10:31:57.761534 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46e1b456988c700012c86fac792b65d2e7c9a049057d5a17efbf600418191910\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3af1cb4a9a556e116874a9b7f7cb75a99db83b991b65280b6b2a96233ca0a85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://f3af1cb4a9a556e116874a9b7f7cb75a99db83b991b65280b6b2a96233ca0a85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:31:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:43Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:43 crc kubenswrapper[4813]: I1125 10:32:43.759593 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mmh87" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bcb41f8-67f5-4a87-8b49-07da054e0c81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fbf69eb2f0afb160e40675e9a17e8a9798a3f02de6a2f3aae7a30ef989e5479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xtc7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mmh87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:43Z is after 
2025-08-24T17:21:41Z" Nov 25 10:32:43 crc kubenswrapper[4813]: I1125 10:32:43.769827 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ece7e9c-d49a-4348-98ec-bd6ab589f750\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b85e2f2d2a870b205f19402a20540fa67104d12d2fcd412ada24c78b0602f2ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j55j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c16599a2b18976267f55176085b4b11e3e253e308707081d06d28d64f4dbb627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j55j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-knhz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:43Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:43 crc kubenswrapper[4813]: I1125 10:32:43.779888 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sbzfj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eccc6bcf-65c9-4741-a1d7-e5545661d3d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf35ea2947d355207c657bf7ef54d855cead727db293543efaa653bb03718f6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8s86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75f58510a2e937f933fadfec014e5ddff8e6cea4df17e8ade67f4c7af9be7104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8s86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:17Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sbzfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:43Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:43 crc kubenswrapper[4813]: I1125 10:32:43.790646 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7391b3f2-dce9-4286-b622-7e7202a042c0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b823e81d1130cdb4373ba0b3d00a5f2d0717e34dcf36d2172550263b44e953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fa62598abd071ec69894326a022e35c2b383a5d5a1b893b0ecc1e30b8b775ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21dd198f1963287a0866dc0aa9d9854472f833cac0d0146a142a370e236b09f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-contr
oller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9ab19e784bbd45e4f4c23288211674ac0d0affbe2736d338967e9237d672760\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9ab19e784bbd45e4f4c23288211674ac0d0affbe2736d338967e9237d672760\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:31:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:43Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:43 crc kubenswrapper[4813]: I1125 10:32:43.800603 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00ebb057ca6152197fa76fc78787533ab8ddaa1e1a096c624e3efc5fcf091332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616fae5157b8d51f903f870d19e7ed40447c3eb954b0e1bd0b3323c27deb59f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:43Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:43 crc kubenswrapper[4813]: I1125 10:32:43.810806 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adac7b8b6297f077adc2d0e402547d19845a4b66a1279e143ba89f014ccdbf15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:43Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:43 crc kubenswrapper[4813]: I1125 10:32:43.822115 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rlpbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98439068-3c89-4c1b-8bb8-8aa848ef0cd3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73be3b0cabd20c94bd5c69211038398effe8adbb93eda17dbb136f17fa5ba62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdxm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rlpbx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:43Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:43 crc kubenswrapper[4813]: I1125 10:32:43.825796 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:43 crc kubenswrapper[4813]: I1125 10:32:43.825823 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:43 crc kubenswrapper[4813]: I1125 10:32:43.825832 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:43 crc kubenswrapper[4813]: I1125 10:32:43.825847 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:43 crc kubenswrapper[4813]: I1125 10:32:43.825857 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:43Z","lastTransitionTime":"2025-11-25T10:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:43 crc kubenswrapper[4813]: I1125 10:32:43.831016 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qltmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7637b907-3ae7-4b15-a4b9-a0c2217384a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://713975d4e8de4e14484cbd711f5279ddce3acad00571bf052b0ed728bd1a0ccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qvsb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"h
ostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qltmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:43Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:43 crc kubenswrapper[4813]: I1125 10:32:43.928285 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:43 crc kubenswrapper[4813]: I1125 10:32:43.928329 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:43 crc kubenswrapper[4813]: I1125 10:32:43.928340 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:43 crc kubenswrapper[4813]: I1125 10:32:43.928358 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:43 crc kubenswrapper[4813]: I1125 10:32:43.928370 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:43Z","lastTransitionTime":"2025-11-25T10:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:44 crc kubenswrapper[4813]: I1125 10:32:44.031023 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:44 crc kubenswrapper[4813]: I1125 10:32:44.031052 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:44 crc kubenswrapper[4813]: I1125 10:32:44.031063 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:44 crc kubenswrapper[4813]: I1125 10:32:44.031077 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:44 crc kubenswrapper[4813]: I1125 10:32:44.031089 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:44Z","lastTransitionTime":"2025-11-25T10:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:44 crc kubenswrapper[4813]: I1125 10:32:44.133369 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:44 crc kubenswrapper[4813]: I1125 10:32:44.133402 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:44 crc kubenswrapper[4813]: I1125 10:32:44.133412 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:44 crc kubenswrapper[4813]: I1125 10:32:44.133424 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:44 crc kubenswrapper[4813]: I1125 10:32:44.133433 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:44Z","lastTransitionTime":"2025-11-25T10:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:44 crc kubenswrapper[4813]: I1125 10:32:44.236250 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:44 crc kubenswrapper[4813]: I1125 10:32:44.236341 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:44 crc kubenswrapper[4813]: I1125 10:32:44.236360 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:44 crc kubenswrapper[4813]: I1125 10:32:44.236383 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:44 crc kubenswrapper[4813]: I1125 10:32:44.236403 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:44Z","lastTransitionTime":"2025-11-25T10:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:44 crc kubenswrapper[4813]: I1125 10:32:44.339797 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:44 crc kubenswrapper[4813]: I1125 10:32:44.339851 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:44 crc kubenswrapper[4813]: I1125 10:32:44.339867 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:44 crc kubenswrapper[4813]: I1125 10:32:44.339891 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:44 crc kubenswrapper[4813]: I1125 10:32:44.339910 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:44Z","lastTransitionTime":"2025-11-25T10:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:44 crc kubenswrapper[4813]: I1125 10:32:44.442537 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:44 crc kubenswrapper[4813]: I1125 10:32:44.442590 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:44 crc kubenswrapper[4813]: I1125 10:32:44.442608 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:44 crc kubenswrapper[4813]: I1125 10:32:44.442627 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:44 crc kubenswrapper[4813]: I1125 10:32:44.442640 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:44Z","lastTransitionTime":"2025-11-25T10:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:44 crc kubenswrapper[4813]: I1125 10:32:44.545254 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:44 crc kubenswrapper[4813]: I1125 10:32:44.545301 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:44 crc kubenswrapper[4813]: I1125 10:32:44.545314 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:44 crc kubenswrapper[4813]: I1125 10:32:44.545330 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:44 crc kubenswrapper[4813]: I1125 10:32:44.545341 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:44Z","lastTransitionTime":"2025-11-25T10:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:44 crc kubenswrapper[4813]: I1125 10:32:44.621045 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 10:32:44 crc kubenswrapper[4813]: I1125 10:32:44.621061 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 10:32:44 crc kubenswrapper[4813]: E1125 10:32:44.621176 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 10:32:44 crc kubenswrapper[4813]: E1125 10:32:44.621250 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 10:32:44 crc kubenswrapper[4813]: I1125 10:32:44.647731 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:44 crc kubenswrapper[4813]: I1125 10:32:44.647773 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:44 crc kubenswrapper[4813]: I1125 10:32:44.647787 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:44 crc kubenswrapper[4813]: I1125 10:32:44.647804 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:44 crc kubenswrapper[4813]: I1125 10:32:44.647814 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:44Z","lastTransitionTime":"2025-11-25T10:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:44 crc kubenswrapper[4813]: I1125 10:32:44.749835 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:44 crc kubenswrapper[4813]: I1125 10:32:44.749864 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:44 crc kubenswrapper[4813]: I1125 10:32:44.749872 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:44 crc kubenswrapper[4813]: I1125 10:32:44.749884 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:44 crc kubenswrapper[4813]: I1125 10:32:44.749892 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:44Z","lastTransitionTime":"2025-11-25T10:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:44 crc kubenswrapper[4813]: I1125 10:32:44.851979 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:44 crc kubenswrapper[4813]: I1125 10:32:44.852386 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:44 crc kubenswrapper[4813]: I1125 10:32:44.852402 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:44 crc kubenswrapper[4813]: I1125 10:32:44.852418 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:44 crc kubenswrapper[4813]: I1125 10:32:44.852427 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:44Z","lastTransitionTime":"2025-11-25T10:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:44 crc kubenswrapper[4813]: I1125 10:32:44.955100 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:44 crc kubenswrapper[4813]: I1125 10:32:44.955352 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:44 crc kubenswrapper[4813]: I1125 10:32:44.955428 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:44 crc kubenswrapper[4813]: I1125 10:32:44.955495 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:44 crc kubenswrapper[4813]: I1125 10:32:44.955556 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:44Z","lastTransitionTime":"2025-11-25T10:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:45 crc kubenswrapper[4813]: I1125 10:32:45.057604 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:45 crc kubenswrapper[4813]: I1125 10:32:45.057645 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:45 crc kubenswrapper[4813]: I1125 10:32:45.057655 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:45 crc kubenswrapper[4813]: I1125 10:32:45.057669 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:45 crc kubenswrapper[4813]: I1125 10:32:45.057690 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:45Z","lastTransitionTime":"2025-11-25T10:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:45 crc kubenswrapper[4813]: I1125 10:32:45.160212 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:45 crc kubenswrapper[4813]: I1125 10:32:45.160255 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:45 crc kubenswrapper[4813]: I1125 10:32:45.160267 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:45 crc kubenswrapper[4813]: I1125 10:32:45.160282 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:45 crc kubenswrapper[4813]: I1125 10:32:45.160293 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:45Z","lastTransitionTime":"2025-11-25T10:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:45 crc kubenswrapper[4813]: I1125 10:32:45.263095 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:45 crc kubenswrapper[4813]: I1125 10:32:45.263140 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:45 crc kubenswrapper[4813]: I1125 10:32:45.263155 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:45 crc kubenswrapper[4813]: I1125 10:32:45.263175 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:45 crc kubenswrapper[4813]: I1125 10:32:45.263191 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:45Z","lastTransitionTime":"2025-11-25T10:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:45 crc kubenswrapper[4813]: I1125 10:32:45.365838 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:45 crc kubenswrapper[4813]: I1125 10:32:45.365881 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:45 crc kubenswrapper[4813]: I1125 10:32:45.365892 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:45 crc kubenswrapper[4813]: I1125 10:32:45.365912 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:45 crc kubenswrapper[4813]: I1125 10:32:45.365925 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:45Z","lastTransitionTime":"2025-11-25T10:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:45 crc kubenswrapper[4813]: I1125 10:32:45.468921 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:45 crc kubenswrapper[4813]: I1125 10:32:45.468970 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:45 crc kubenswrapper[4813]: I1125 10:32:45.468986 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:45 crc kubenswrapper[4813]: I1125 10:32:45.469001 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:45 crc kubenswrapper[4813]: I1125 10:32:45.469018 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:45Z","lastTransitionTime":"2025-11-25T10:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:45 crc kubenswrapper[4813]: I1125 10:32:45.570954 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:45 crc kubenswrapper[4813]: I1125 10:32:45.571003 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:45 crc kubenswrapper[4813]: I1125 10:32:45.571013 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:45 crc kubenswrapper[4813]: I1125 10:32:45.571029 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:45 crc kubenswrapper[4813]: I1125 10:32:45.571040 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:45Z","lastTransitionTime":"2025-11-25T10:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:45 crc kubenswrapper[4813]: I1125 10:32:45.621361 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 10:32:45 crc kubenswrapper[4813]: I1125 10:32:45.621513 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w28xl" Nov 25 10:32:45 crc kubenswrapper[4813]: E1125 10:32:45.621594 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 10:32:45 crc kubenswrapper[4813]: E1125 10:32:45.621728 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w28xl" podUID="74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2" Nov 25 10:32:45 crc kubenswrapper[4813]: I1125 10:32:45.674583 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:45 crc kubenswrapper[4813]: I1125 10:32:45.674616 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:45 crc kubenswrapper[4813]: I1125 10:32:45.674623 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:45 crc kubenswrapper[4813]: I1125 10:32:45.674638 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:45 crc kubenswrapper[4813]: I1125 10:32:45.674651 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:45Z","lastTransitionTime":"2025-11-25T10:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:45 crc kubenswrapper[4813]: I1125 10:32:45.777661 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:45 crc kubenswrapper[4813]: I1125 10:32:45.777725 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:45 crc kubenswrapper[4813]: I1125 10:32:45.777738 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:45 crc kubenswrapper[4813]: I1125 10:32:45.777782 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:45 crc kubenswrapper[4813]: I1125 10:32:45.777800 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:45Z","lastTransitionTime":"2025-11-25T10:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:45 crc kubenswrapper[4813]: I1125 10:32:45.879941 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:45 crc kubenswrapper[4813]: I1125 10:32:45.879975 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:45 crc kubenswrapper[4813]: I1125 10:32:45.879985 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:45 crc kubenswrapper[4813]: I1125 10:32:45.879999 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:45 crc kubenswrapper[4813]: I1125 10:32:45.880007 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:45Z","lastTransitionTime":"2025-11-25T10:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:45 crc kubenswrapper[4813]: I1125 10:32:45.982556 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:45 crc kubenswrapper[4813]: I1125 10:32:45.982596 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:45 crc kubenswrapper[4813]: I1125 10:32:45.982607 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:45 crc kubenswrapper[4813]: I1125 10:32:45.982624 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:45 crc kubenswrapper[4813]: I1125 10:32:45.982638 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:45Z","lastTransitionTime":"2025-11-25T10:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:46 crc kubenswrapper[4813]: I1125 10:32:46.084939 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:46 crc kubenswrapper[4813]: I1125 10:32:46.084982 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:46 crc kubenswrapper[4813]: I1125 10:32:46.084999 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:46 crc kubenswrapper[4813]: I1125 10:32:46.085019 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:46 crc kubenswrapper[4813]: I1125 10:32:46.085035 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:46Z","lastTransitionTime":"2025-11-25T10:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:46 crc kubenswrapper[4813]: I1125 10:32:46.187192 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:46 crc kubenswrapper[4813]: I1125 10:32:46.187243 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:46 crc kubenswrapper[4813]: I1125 10:32:46.187260 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:46 crc kubenswrapper[4813]: I1125 10:32:46.187286 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:46 crc kubenswrapper[4813]: I1125 10:32:46.187303 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:46Z","lastTransitionTime":"2025-11-25T10:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:46 crc kubenswrapper[4813]: I1125 10:32:46.290072 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:46 crc kubenswrapper[4813]: I1125 10:32:46.290166 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:46 crc kubenswrapper[4813]: I1125 10:32:46.290186 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:46 crc kubenswrapper[4813]: I1125 10:32:46.290209 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:46 crc kubenswrapper[4813]: I1125 10:32:46.290259 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:46Z","lastTransitionTime":"2025-11-25T10:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:46 crc kubenswrapper[4813]: I1125 10:32:46.392586 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:46 crc kubenswrapper[4813]: I1125 10:32:46.392649 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:46 crc kubenswrapper[4813]: I1125 10:32:46.392666 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:46 crc kubenswrapper[4813]: I1125 10:32:46.392715 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:46 crc kubenswrapper[4813]: I1125 10:32:46.392741 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:46Z","lastTransitionTime":"2025-11-25T10:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:46 crc kubenswrapper[4813]: I1125 10:32:46.495299 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:46 crc kubenswrapper[4813]: I1125 10:32:46.495338 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:46 crc kubenswrapper[4813]: I1125 10:32:46.495349 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:46 crc kubenswrapper[4813]: I1125 10:32:46.495364 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:46 crc kubenswrapper[4813]: I1125 10:32:46.495375 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:46Z","lastTransitionTime":"2025-11-25T10:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:46 crc kubenswrapper[4813]: I1125 10:32:46.597831 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:46 crc kubenswrapper[4813]: I1125 10:32:46.597859 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:46 crc kubenswrapper[4813]: I1125 10:32:46.597866 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:46 crc kubenswrapper[4813]: I1125 10:32:46.597879 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:46 crc kubenswrapper[4813]: I1125 10:32:46.597887 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:46Z","lastTransitionTime":"2025-11-25T10:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:46 crc kubenswrapper[4813]: I1125 10:32:46.621001 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 10:32:46 crc kubenswrapper[4813]: I1125 10:32:46.621077 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 10:32:46 crc kubenswrapper[4813]: E1125 10:32:46.621107 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 10:32:46 crc kubenswrapper[4813]: E1125 10:32:46.621233 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 10:32:46 crc kubenswrapper[4813]: I1125 10:32:46.699864 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:46 crc kubenswrapper[4813]: I1125 10:32:46.699916 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:46 crc kubenswrapper[4813]: I1125 10:32:46.699928 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:46 crc kubenswrapper[4813]: I1125 10:32:46.699944 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:46 crc kubenswrapper[4813]: I1125 10:32:46.699956 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:46Z","lastTransitionTime":"2025-11-25T10:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:46 crc kubenswrapper[4813]: I1125 10:32:46.802331 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:46 crc kubenswrapper[4813]: I1125 10:32:46.802364 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:46 crc kubenswrapper[4813]: I1125 10:32:46.802372 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:46 crc kubenswrapper[4813]: I1125 10:32:46.802385 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:46 crc kubenswrapper[4813]: I1125 10:32:46.802393 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:46Z","lastTransitionTime":"2025-11-25T10:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:46 crc kubenswrapper[4813]: I1125 10:32:46.904613 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:46 crc kubenswrapper[4813]: I1125 10:32:46.904647 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:46 crc kubenswrapper[4813]: I1125 10:32:46.904658 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:46 crc kubenswrapper[4813]: I1125 10:32:46.904671 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:46 crc kubenswrapper[4813]: I1125 10:32:46.904713 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:46Z","lastTransitionTime":"2025-11-25T10:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:47 crc kubenswrapper[4813]: I1125 10:32:47.008464 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:47 crc kubenswrapper[4813]: I1125 10:32:47.008525 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:47 crc kubenswrapper[4813]: I1125 10:32:47.008535 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:47 crc kubenswrapper[4813]: I1125 10:32:47.008556 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:47 crc kubenswrapper[4813]: I1125 10:32:47.008571 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:47Z","lastTransitionTime":"2025-11-25T10:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:47 crc kubenswrapper[4813]: I1125 10:32:47.111266 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:47 crc kubenswrapper[4813]: I1125 10:32:47.111318 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:47 crc kubenswrapper[4813]: I1125 10:32:47.111327 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:47 crc kubenswrapper[4813]: I1125 10:32:47.111341 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:47 crc kubenswrapper[4813]: I1125 10:32:47.111351 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:47Z","lastTransitionTime":"2025-11-25T10:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:47 crc kubenswrapper[4813]: I1125 10:32:47.213904 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:47 crc kubenswrapper[4813]: I1125 10:32:47.213974 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:47 crc kubenswrapper[4813]: I1125 10:32:47.213983 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:47 crc kubenswrapper[4813]: I1125 10:32:47.214003 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:47 crc kubenswrapper[4813]: I1125 10:32:47.214019 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:47Z","lastTransitionTime":"2025-11-25T10:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:47 crc kubenswrapper[4813]: I1125 10:32:47.317946 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:47 crc kubenswrapper[4813]: I1125 10:32:47.317998 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:47 crc kubenswrapper[4813]: I1125 10:32:47.318009 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:47 crc kubenswrapper[4813]: I1125 10:32:47.318029 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:47 crc kubenswrapper[4813]: I1125 10:32:47.318041 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:47Z","lastTransitionTime":"2025-11-25T10:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:47 crc kubenswrapper[4813]: I1125 10:32:47.420142 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:47 crc kubenswrapper[4813]: I1125 10:32:47.420196 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:47 crc kubenswrapper[4813]: I1125 10:32:47.420204 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:47 crc kubenswrapper[4813]: I1125 10:32:47.420221 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:47 crc kubenswrapper[4813]: I1125 10:32:47.420231 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:47Z","lastTransitionTime":"2025-11-25T10:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:47 crc kubenswrapper[4813]: I1125 10:32:47.524025 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:47 crc kubenswrapper[4813]: I1125 10:32:47.524086 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:47 crc kubenswrapper[4813]: I1125 10:32:47.524099 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:47 crc kubenswrapper[4813]: I1125 10:32:47.524119 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:47 crc kubenswrapper[4813]: I1125 10:32:47.524139 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:47Z","lastTransitionTime":"2025-11-25T10:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:47 crc kubenswrapper[4813]: I1125 10:32:47.621258 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 10:32:47 crc kubenswrapper[4813]: E1125 10:32:47.621424 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 10:32:47 crc kubenswrapper[4813]: I1125 10:32:47.621444 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w28xl" Nov 25 10:32:47 crc kubenswrapper[4813]: E1125 10:32:47.621586 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-w28xl" podUID="74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2" Nov 25 10:32:47 crc kubenswrapper[4813]: I1125 10:32:47.626051 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:47 crc kubenswrapper[4813]: I1125 10:32:47.626083 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:47 crc kubenswrapper[4813]: I1125 10:32:47.626094 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:47 crc kubenswrapper[4813]: I1125 10:32:47.626110 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:47 crc kubenswrapper[4813]: I1125 10:32:47.626122 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:47Z","lastTransitionTime":"2025-11-25T10:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:47 crc kubenswrapper[4813]: I1125 10:32:47.732608 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:47 crc kubenswrapper[4813]: I1125 10:32:47.733112 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:47 crc kubenswrapper[4813]: I1125 10:32:47.733127 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:47 crc kubenswrapper[4813]: I1125 10:32:47.733154 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:47 crc kubenswrapper[4813]: I1125 10:32:47.733171 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:47Z","lastTransitionTime":"2025-11-25T10:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:47 crc kubenswrapper[4813]: I1125 10:32:47.836600 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:47 crc kubenswrapper[4813]: I1125 10:32:47.836642 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:47 crc kubenswrapper[4813]: I1125 10:32:47.836656 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:47 crc kubenswrapper[4813]: I1125 10:32:47.836673 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:47 crc kubenswrapper[4813]: I1125 10:32:47.836711 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:47Z","lastTransitionTime":"2025-11-25T10:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:47 crc kubenswrapper[4813]: I1125 10:32:47.938935 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:47 crc kubenswrapper[4813]: I1125 10:32:47.939004 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:47 crc kubenswrapper[4813]: I1125 10:32:47.939015 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:47 crc kubenswrapper[4813]: I1125 10:32:47.939032 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:47 crc kubenswrapper[4813]: I1125 10:32:47.939042 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:47Z","lastTransitionTime":"2025-11-25T10:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:48 crc kubenswrapper[4813]: I1125 10:32:48.041858 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:48 crc kubenswrapper[4813]: I1125 10:32:48.041899 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:48 crc kubenswrapper[4813]: I1125 10:32:48.041910 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:48 crc kubenswrapper[4813]: I1125 10:32:48.041926 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:48 crc kubenswrapper[4813]: I1125 10:32:48.041936 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:48Z","lastTransitionTime":"2025-11-25T10:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:48 crc kubenswrapper[4813]: I1125 10:32:48.144445 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:48 crc kubenswrapper[4813]: I1125 10:32:48.144525 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:48 crc kubenswrapper[4813]: I1125 10:32:48.144539 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:48 crc kubenswrapper[4813]: I1125 10:32:48.144556 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:48 crc kubenswrapper[4813]: I1125 10:32:48.144569 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:48Z","lastTransitionTime":"2025-11-25T10:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:48 crc kubenswrapper[4813]: I1125 10:32:48.247295 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:48 crc kubenswrapper[4813]: I1125 10:32:48.247347 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:48 crc kubenswrapper[4813]: I1125 10:32:48.247361 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:48 crc kubenswrapper[4813]: I1125 10:32:48.247379 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:48 crc kubenswrapper[4813]: I1125 10:32:48.247393 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:48Z","lastTransitionTime":"2025-11-25T10:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:48 crc kubenswrapper[4813]: I1125 10:32:48.350364 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:48 crc kubenswrapper[4813]: I1125 10:32:48.350430 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:48 crc kubenswrapper[4813]: I1125 10:32:48.350447 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:48 crc kubenswrapper[4813]: I1125 10:32:48.350473 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:48 crc kubenswrapper[4813]: I1125 10:32:48.350496 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:48Z","lastTransitionTime":"2025-11-25T10:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:48 crc kubenswrapper[4813]: I1125 10:32:48.453182 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:48 crc kubenswrapper[4813]: I1125 10:32:48.453239 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:48 crc kubenswrapper[4813]: I1125 10:32:48.453251 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:48 crc kubenswrapper[4813]: I1125 10:32:48.453274 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:48 crc kubenswrapper[4813]: I1125 10:32:48.453288 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:48Z","lastTransitionTime":"2025-11-25T10:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:48 crc kubenswrapper[4813]: I1125 10:32:48.555982 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:48 crc kubenswrapper[4813]: I1125 10:32:48.556070 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:48 crc kubenswrapper[4813]: I1125 10:32:48.556092 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:48 crc kubenswrapper[4813]: I1125 10:32:48.556127 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:48 crc kubenswrapper[4813]: I1125 10:32:48.556147 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:48Z","lastTransitionTime":"2025-11-25T10:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:48 crc kubenswrapper[4813]: I1125 10:32:48.620903 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 10:32:48 crc kubenswrapper[4813]: I1125 10:32:48.621045 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 10:32:48 crc kubenswrapper[4813]: E1125 10:32:48.621870 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 10:32:48 crc kubenswrapper[4813]: E1125 10:32:48.622316 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 10:32:48 crc kubenswrapper[4813]: I1125 10:32:48.622776 4813 scope.go:117] "RemoveContainer" containerID="0f47ead7e465395c7960e5ab292e2f2869ed1630436f2739b1e0420f217a96cf" Nov 25 10:32:48 crc kubenswrapper[4813]: E1125 10:32:48.623244 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-8s5k7_openshift-ovn-kubernetes(8460ec76-ba89-4f8f-9055-d7274ab52d11)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" podUID="8460ec76-ba89-4f8f-9055-d7274ab52d11" Nov 25 10:32:48 crc kubenswrapper[4813]: I1125 10:32:48.658712 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:48 crc kubenswrapper[4813]: I1125 10:32:48.658772 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:48 crc kubenswrapper[4813]: I1125 10:32:48.658784 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:48 crc kubenswrapper[4813]: I1125 10:32:48.658809 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:48 crc kubenswrapper[4813]: I1125 10:32:48.658829 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:48Z","lastTransitionTime":"2025-11-25T10:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:48 crc kubenswrapper[4813]: I1125 10:32:48.761885 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:48 crc kubenswrapper[4813]: I1125 10:32:48.761936 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:48 crc kubenswrapper[4813]: I1125 10:32:48.761946 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:48 crc kubenswrapper[4813]: I1125 10:32:48.761963 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:48 crc kubenswrapper[4813]: I1125 10:32:48.761975 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:48Z","lastTransitionTime":"2025-11-25T10:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:48 crc kubenswrapper[4813]: I1125 10:32:48.864176 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:48 crc kubenswrapper[4813]: I1125 10:32:48.864212 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:48 crc kubenswrapper[4813]: I1125 10:32:48.864228 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:48 crc kubenswrapper[4813]: I1125 10:32:48.864245 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:48 crc kubenswrapper[4813]: I1125 10:32:48.864258 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:48Z","lastTransitionTime":"2025-11-25T10:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:48 crc kubenswrapper[4813]: I1125 10:32:48.966873 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:48 crc kubenswrapper[4813]: I1125 10:32:48.966909 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:48 crc kubenswrapper[4813]: I1125 10:32:48.966921 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:48 crc kubenswrapper[4813]: I1125 10:32:48.966940 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:48 crc kubenswrapper[4813]: I1125 10:32:48.966952 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:48Z","lastTransitionTime":"2025-11-25T10:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:49 crc kubenswrapper[4813]: I1125 10:32:49.069454 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:49 crc kubenswrapper[4813]: I1125 10:32:49.069520 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:49 crc kubenswrapper[4813]: I1125 10:32:49.069535 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:49 crc kubenswrapper[4813]: I1125 10:32:49.069560 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:49 crc kubenswrapper[4813]: I1125 10:32:49.069576 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:49Z","lastTransitionTime":"2025-11-25T10:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:49 crc kubenswrapper[4813]: I1125 10:32:49.171438 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:49 crc kubenswrapper[4813]: I1125 10:32:49.171478 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:49 crc kubenswrapper[4813]: I1125 10:32:49.171488 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:49 crc kubenswrapper[4813]: I1125 10:32:49.171504 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:49 crc kubenswrapper[4813]: I1125 10:32:49.171514 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:49Z","lastTransitionTime":"2025-11-25T10:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:49 crc kubenswrapper[4813]: I1125 10:32:49.273577 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:49 crc kubenswrapper[4813]: I1125 10:32:49.273620 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:49 crc kubenswrapper[4813]: I1125 10:32:49.273631 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:49 crc kubenswrapper[4813]: I1125 10:32:49.273647 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:49 crc kubenswrapper[4813]: I1125 10:32:49.273660 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:49Z","lastTransitionTime":"2025-11-25T10:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:49 crc kubenswrapper[4813]: I1125 10:32:49.376495 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:49 crc kubenswrapper[4813]: I1125 10:32:49.376545 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:49 crc kubenswrapper[4813]: I1125 10:32:49.376628 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:49 crc kubenswrapper[4813]: I1125 10:32:49.376650 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:49 crc kubenswrapper[4813]: I1125 10:32:49.376660 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:49Z","lastTransitionTime":"2025-11-25T10:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:49 crc kubenswrapper[4813]: I1125 10:32:49.479768 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:49 crc kubenswrapper[4813]: I1125 10:32:49.480112 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:49 crc kubenswrapper[4813]: I1125 10:32:49.480253 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:49 crc kubenswrapper[4813]: I1125 10:32:49.481003 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:49 crc kubenswrapper[4813]: I1125 10:32:49.481052 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:49Z","lastTransitionTime":"2025-11-25T10:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:49 crc kubenswrapper[4813]: I1125 10:32:49.583800 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:49 crc kubenswrapper[4813]: I1125 10:32:49.583836 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:49 crc kubenswrapper[4813]: I1125 10:32:49.583862 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:49 crc kubenswrapper[4813]: I1125 10:32:49.583875 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:49 crc kubenswrapper[4813]: I1125 10:32:49.583885 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:49Z","lastTransitionTime":"2025-11-25T10:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:49 crc kubenswrapper[4813]: I1125 10:32:49.620554 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 10:32:49 crc kubenswrapper[4813]: E1125 10:32:49.620739 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 10:32:49 crc kubenswrapper[4813]: I1125 10:32:49.620873 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-w28xl" Nov 25 10:32:49 crc kubenswrapper[4813]: E1125 10:32:49.621187 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w28xl" podUID="74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2" Nov 25 10:32:49 crc kubenswrapper[4813]: I1125 10:32:49.686038 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:49 crc kubenswrapper[4813]: I1125 10:32:49.686252 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:49 crc kubenswrapper[4813]: I1125 10:32:49.686285 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:49 crc kubenswrapper[4813]: I1125 10:32:49.686316 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:49 crc kubenswrapper[4813]: I1125 10:32:49.686341 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:49Z","lastTransitionTime":"2025-11-25T10:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:49 crc kubenswrapper[4813]: I1125 10:32:49.790157 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:49 crc kubenswrapper[4813]: I1125 10:32:49.790216 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:49 crc kubenswrapper[4813]: I1125 10:32:49.790235 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:49 crc kubenswrapper[4813]: I1125 10:32:49.790257 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:49 crc kubenswrapper[4813]: I1125 10:32:49.790272 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:49Z","lastTransitionTime":"2025-11-25T10:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:49 crc kubenswrapper[4813]: I1125 10:32:49.892952 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:49 crc kubenswrapper[4813]: I1125 10:32:49.893001 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:49 crc kubenswrapper[4813]: I1125 10:32:49.893011 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:49 crc kubenswrapper[4813]: I1125 10:32:49.893035 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:49 crc kubenswrapper[4813]: I1125 10:32:49.893049 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:49Z","lastTransitionTime":"2025-11-25T10:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:49 crc kubenswrapper[4813]: I1125 10:32:49.995495 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:49 crc kubenswrapper[4813]: I1125 10:32:49.995585 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:49 crc kubenswrapper[4813]: I1125 10:32:49.995602 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:49 crc kubenswrapper[4813]: I1125 10:32:49.995617 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:49 crc kubenswrapper[4813]: I1125 10:32:49.995626 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:49Z","lastTransitionTime":"2025-11-25T10:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:50 crc kubenswrapper[4813]: I1125 10:32:50.098465 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:50 crc kubenswrapper[4813]: I1125 10:32:50.098539 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:50 crc kubenswrapper[4813]: I1125 10:32:50.098548 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:50 crc kubenswrapper[4813]: I1125 10:32:50.098562 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:50 crc kubenswrapper[4813]: I1125 10:32:50.098572 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:50Z","lastTransitionTime":"2025-11-25T10:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:50 crc kubenswrapper[4813]: I1125 10:32:50.201124 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:50 crc kubenswrapper[4813]: I1125 10:32:50.201176 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:50 crc kubenswrapper[4813]: I1125 10:32:50.201189 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:50 crc kubenswrapper[4813]: I1125 10:32:50.201206 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:50 crc kubenswrapper[4813]: I1125 10:32:50.201218 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:50Z","lastTransitionTime":"2025-11-25T10:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:50 crc kubenswrapper[4813]: I1125 10:32:50.304313 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:50 crc kubenswrapper[4813]: I1125 10:32:50.304361 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:50 crc kubenswrapper[4813]: I1125 10:32:50.304371 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:50 crc kubenswrapper[4813]: I1125 10:32:50.304390 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:50 crc kubenswrapper[4813]: I1125 10:32:50.304404 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:50Z","lastTransitionTime":"2025-11-25T10:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:50 crc kubenswrapper[4813]: I1125 10:32:50.406567 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:50 crc kubenswrapper[4813]: I1125 10:32:50.406638 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:50 crc kubenswrapper[4813]: I1125 10:32:50.406652 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:50 crc kubenswrapper[4813]: I1125 10:32:50.406697 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:50 crc kubenswrapper[4813]: I1125 10:32:50.406734 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:50Z","lastTransitionTime":"2025-11-25T10:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:50 crc kubenswrapper[4813]: I1125 10:32:50.509614 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:50 crc kubenswrapper[4813]: I1125 10:32:50.509646 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:50 crc kubenswrapper[4813]: I1125 10:32:50.509654 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:50 crc kubenswrapper[4813]: I1125 10:32:50.509666 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:50 crc kubenswrapper[4813]: I1125 10:32:50.509700 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:50Z","lastTransitionTime":"2025-11-25T10:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:50 crc kubenswrapper[4813]: I1125 10:32:50.612508 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:50 crc kubenswrapper[4813]: I1125 10:32:50.612553 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:50 crc kubenswrapper[4813]: I1125 10:32:50.612565 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:50 crc kubenswrapper[4813]: I1125 10:32:50.612582 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:50 crc kubenswrapper[4813]: I1125 10:32:50.612596 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:50Z","lastTransitionTime":"2025-11-25T10:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:50 crc kubenswrapper[4813]: I1125 10:32:50.620710 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 10:32:50 crc kubenswrapper[4813]: I1125 10:32:50.620802 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 10:32:50 crc kubenswrapper[4813]: E1125 10:32:50.620819 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 10:32:50 crc kubenswrapper[4813]: E1125 10:32:50.620951 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 10:32:50 crc kubenswrapper[4813]: I1125 10:32:50.715857 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:50 crc kubenswrapper[4813]: I1125 10:32:50.715929 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:50 crc kubenswrapper[4813]: I1125 10:32:50.715947 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:50 crc kubenswrapper[4813]: I1125 10:32:50.715975 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:50 crc kubenswrapper[4813]: I1125 10:32:50.715997 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:50Z","lastTransitionTime":"2025-11-25T10:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:50 crc kubenswrapper[4813]: I1125 10:32:50.818843 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:50 crc kubenswrapper[4813]: I1125 10:32:50.818893 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:50 crc kubenswrapper[4813]: I1125 10:32:50.818904 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:50 crc kubenswrapper[4813]: I1125 10:32:50.818926 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:50 crc kubenswrapper[4813]: I1125 10:32:50.818941 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:50Z","lastTransitionTime":"2025-11-25T10:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:50 crc kubenswrapper[4813]: I1125 10:32:50.922074 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:50 crc kubenswrapper[4813]: I1125 10:32:50.922133 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:50 crc kubenswrapper[4813]: I1125 10:32:50.922146 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:50 crc kubenswrapper[4813]: I1125 10:32:50.922169 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:50 crc kubenswrapper[4813]: I1125 10:32:50.922185 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:50Z","lastTransitionTime":"2025-11-25T10:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:50 crc kubenswrapper[4813]: I1125 10:32:50.953145 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:50 crc kubenswrapper[4813]: I1125 10:32:50.953214 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:50 crc kubenswrapper[4813]: I1125 10:32:50.953227 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:50 crc kubenswrapper[4813]: I1125 10:32:50.953252 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:50 crc kubenswrapper[4813]: I1125 10:32:50.953266 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:50Z","lastTransitionTime":"2025-11-25T10:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:50 crc kubenswrapper[4813]: E1125 10:32:50.966670 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b8f6803-8c92-44d2-bc35-374b0f00608e\\\",\\\"systemUUID\\\":\\\"85f815b0-dc24-49ca-a7fb-6bc8e198cbb1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:50Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:50 crc kubenswrapper[4813]: I1125 10:32:50.970270 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:50 crc kubenswrapper[4813]: I1125 10:32:50.970313 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 10:32:50 crc kubenswrapper[4813]: I1125 10:32:50.970325 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:50 crc kubenswrapper[4813]: I1125 10:32:50.970344 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:50 crc kubenswrapper[4813]: I1125 10:32:50.970358 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:50Z","lastTransitionTime":"2025-11-25T10:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:50 crc kubenswrapper[4813]: E1125 10:32:50.983142 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b8f6803-8c92-44d2-bc35-374b0f00608e\\\",\\\"systemUUID\\\":\\\"85f815b0-dc24-49ca-a7fb-6bc8e198cbb1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:50Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:50 crc kubenswrapper[4813]: I1125 10:32:50.987120 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:50 crc kubenswrapper[4813]: I1125 10:32:50.987148 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 10:32:50 crc kubenswrapper[4813]: I1125 10:32:50.987156 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:50 crc kubenswrapper[4813]: I1125 10:32:50.987170 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:50 crc kubenswrapper[4813]: I1125 10:32:50.987178 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:50Z","lastTransitionTime":"2025-11-25T10:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:51 crc kubenswrapper[4813]: E1125 10:32:51.000652 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b8f6803-8c92-44d2-bc35-374b0f00608e\\\",\\\"systemUUID\\\":\\\"85f815b0-dc24-49ca-a7fb-6bc8e198cbb1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:50Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:51 crc kubenswrapper[4813]: I1125 10:32:51.005167 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:51 crc kubenswrapper[4813]: I1125 10:32:51.005252 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 10:32:51 crc kubenswrapper[4813]: I1125 10:32:51.005269 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:51 crc kubenswrapper[4813]: I1125 10:32:51.005296 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:51 crc kubenswrapper[4813]: I1125 10:32:51.005316 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:51Z","lastTransitionTime":"2025-11-25T10:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:51 crc kubenswrapper[4813]: E1125 10:32:51.019318 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b8f6803-8c92-44d2-bc35-374b0f00608e\\\",\\\"systemUUID\\\":\\\"85f815b0-dc24-49ca-a7fb-6bc8e198cbb1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:51Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:51 crc kubenswrapper[4813]: I1125 10:32:51.024257 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:51 crc kubenswrapper[4813]: I1125 10:32:51.024320 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 10:32:51 crc kubenswrapper[4813]: I1125 10:32:51.024333 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:51 crc kubenswrapper[4813]: I1125 10:32:51.024355 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:51 crc kubenswrapper[4813]: I1125 10:32:51.024370 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:51Z","lastTransitionTime":"2025-11-25T10:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:51 crc kubenswrapper[4813]: E1125 10:32:51.037912 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:32:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b8f6803-8c92-44d2-bc35-374b0f00608e\\\",\\\"systemUUID\\\":\\\"85f815b0-dc24-49ca-a7fb-6bc8e198cbb1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:51Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:51 crc kubenswrapper[4813]: E1125 10:32:51.038101 4813 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 10:32:51 crc kubenswrapper[4813]: I1125 10:32:51.040732 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 25 10:32:51 crc kubenswrapper[4813]: I1125 10:32:51.040814 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:51 crc kubenswrapper[4813]: I1125 10:32:51.040823 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:51 crc kubenswrapper[4813]: I1125 10:32:51.040839 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:51 crc kubenswrapper[4813]: I1125 10:32:51.040850 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:51Z","lastTransitionTime":"2025-11-25T10:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:51 crc kubenswrapper[4813]: I1125 10:32:51.143309 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:51 crc kubenswrapper[4813]: I1125 10:32:51.143341 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:51 crc kubenswrapper[4813]: I1125 10:32:51.143352 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:51 crc kubenswrapper[4813]: I1125 10:32:51.143366 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:51 crc kubenswrapper[4813]: I1125 10:32:51.143375 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:51Z","lastTransitionTime":"2025-11-25T10:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:51 crc kubenswrapper[4813]: I1125 10:32:51.246151 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:51 crc kubenswrapper[4813]: I1125 10:32:51.246502 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:51 crc kubenswrapper[4813]: I1125 10:32:51.246602 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:51 crc kubenswrapper[4813]: I1125 10:32:51.246724 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:51 crc kubenswrapper[4813]: I1125 10:32:51.246817 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:51Z","lastTransitionTime":"2025-11-25T10:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:51 crc kubenswrapper[4813]: I1125 10:32:51.350838 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:51 crc kubenswrapper[4813]: I1125 10:32:51.350906 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:51 crc kubenswrapper[4813]: I1125 10:32:51.350924 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:51 crc kubenswrapper[4813]: I1125 10:32:51.350950 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:51 crc kubenswrapper[4813]: I1125 10:32:51.350970 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:51Z","lastTransitionTime":"2025-11-25T10:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:51 crc kubenswrapper[4813]: I1125 10:32:51.453810 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:51 crc kubenswrapper[4813]: I1125 10:32:51.453868 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:51 crc kubenswrapper[4813]: I1125 10:32:51.453878 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:51 crc kubenswrapper[4813]: I1125 10:32:51.453898 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:51 crc kubenswrapper[4813]: I1125 10:32:51.453911 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:51Z","lastTransitionTime":"2025-11-25T10:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:51 crc kubenswrapper[4813]: I1125 10:32:51.557364 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:51 crc kubenswrapper[4813]: I1125 10:32:51.557416 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:51 crc kubenswrapper[4813]: I1125 10:32:51.557427 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:51 crc kubenswrapper[4813]: I1125 10:32:51.557449 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:51 crc kubenswrapper[4813]: I1125 10:32:51.557462 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:51Z","lastTransitionTime":"2025-11-25T10:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:51 crc kubenswrapper[4813]: I1125 10:32:51.620705 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 10:32:51 crc kubenswrapper[4813]: I1125 10:32:51.620909 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w28xl" Nov 25 10:32:51 crc kubenswrapper[4813]: E1125 10:32:51.620998 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 10:32:51 crc kubenswrapper[4813]: E1125 10:32:51.621133 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w28xl" podUID="74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2" Nov 25 10:32:51 crc kubenswrapper[4813]: I1125 10:32:51.661053 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:51 crc kubenswrapper[4813]: I1125 10:32:51.661116 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:51 crc kubenswrapper[4813]: I1125 10:32:51.661133 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:51 crc kubenswrapper[4813]: I1125 10:32:51.661153 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:51 crc kubenswrapper[4813]: I1125 10:32:51.661170 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:51Z","lastTransitionTime":"2025-11-25T10:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:51 crc kubenswrapper[4813]: I1125 10:32:51.694989 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2-metrics-certs\") pod \"network-metrics-daemon-w28xl\" (UID: \"74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2\") " pod="openshift-multus/network-metrics-daemon-w28xl" Nov 25 10:32:51 crc kubenswrapper[4813]: E1125 10:32:51.695153 4813 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 10:32:51 crc kubenswrapper[4813]: E1125 10:32:51.695220 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2-metrics-certs podName:74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2 nodeName:}" failed. No retries permitted until 2025-11-25 10:33:23.695200453 +0000 UTC m=+100.824910339 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2-metrics-certs") pod "network-metrics-daemon-w28xl" (UID: "74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 10:32:51 crc kubenswrapper[4813]: I1125 10:32:51.764178 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:51 crc kubenswrapper[4813]: I1125 10:32:51.764233 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:51 crc kubenswrapper[4813]: I1125 10:32:51.764248 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:51 crc kubenswrapper[4813]: I1125 10:32:51.764265 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:51 crc kubenswrapper[4813]: I1125 10:32:51.764277 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:51Z","lastTransitionTime":"2025-11-25T10:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:51 crc kubenswrapper[4813]: I1125 10:32:51.867191 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:51 crc kubenswrapper[4813]: I1125 10:32:51.867230 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:51 crc kubenswrapper[4813]: I1125 10:32:51.867239 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:51 crc kubenswrapper[4813]: I1125 10:32:51.867285 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:51 crc kubenswrapper[4813]: I1125 10:32:51.867312 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:51Z","lastTransitionTime":"2025-11-25T10:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:51 crc kubenswrapper[4813]: I1125 10:32:51.970276 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:51 crc kubenswrapper[4813]: I1125 10:32:51.970328 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:51 crc kubenswrapper[4813]: I1125 10:32:51.970359 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:51 crc kubenswrapper[4813]: I1125 10:32:51.970392 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:51 crc kubenswrapper[4813]: I1125 10:32:51.970404 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:51Z","lastTransitionTime":"2025-11-25T10:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:52 crc kubenswrapper[4813]: I1125 10:32:52.073055 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:52 crc kubenswrapper[4813]: I1125 10:32:52.073089 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:52 crc kubenswrapper[4813]: I1125 10:32:52.073098 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:52 crc kubenswrapper[4813]: I1125 10:32:52.073113 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:52 crc kubenswrapper[4813]: I1125 10:32:52.073124 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:52Z","lastTransitionTime":"2025-11-25T10:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:52 crc kubenswrapper[4813]: I1125 10:32:52.175640 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:52 crc kubenswrapper[4813]: I1125 10:32:52.175721 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:52 crc kubenswrapper[4813]: I1125 10:32:52.175734 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:52 crc kubenswrapper[4813]: I1125 10:32:52.175770 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:52 crc kubenswrapper[4813]: I1125 10:32:52.175782 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:52Z","lastTransitionTime":"2025-11-25T10:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:52 crc kubenswrapper[4813]: I1125 10:32:52.278264 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:52 crc kubenswrapper[4813]: I1125 10:32:52.278317 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:52 crc kubenswrapper[4813]: I1125 10:32:52.278329 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:52 crc kubenswrapper[4813]: I1125 10:32:52.278346 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:52 crc kubenswrapper[4813]: I1125 10:32:52.278357 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:52Z","lastTransitionTime":"2025-11-25T10:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:52 crc kubenswrapper[4813]: I1125 10:32:52.380575 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:52 crc kubenswrapper[4813]: I1125 10:32:52.380612 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:52 crc kubenswrapper[4813]: I1125 10:32:52.380621 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:52 crc kubenswrapper[4813]: I1125 10:32:52.380634 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:52 crc kubenswrapper[4813]: I1125 10:32:52.380643 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:52Z","lastTransitionTime":"2025-11-25T10:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:52 crc kubenswrapper[4813]: I1125 10:32:52.482520 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:52 crc kubenswrapper[4813]: I1125 10:32:52.482550 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:52 crc kubenswrapper[4813]: I1125 10:32:52.482559 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:52 crc kubenswrapper[4813]: I1125 10:32:52.482573 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:52 crc kubenswrapper[4813]: I1125 10:32:52.482582 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:52Z","lastTransitionTime":"2025-11-25T10:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:52 crc kubenswrapper[4813]: I1125 10:32:52.585165 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:52 crc kubenswrapper[4813]: I1125 10:32:52.585206 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:52 crc kubenswrapper[4813]: I1125 10:32:52.585217 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:52 crc kubenswrapper[4813]: I1125 10:32:52.585232 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:52 crc kubenswrapper[4813]: I1125 10:32:52.585243 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:52Z","lastTransitionTime":"2025-11-25T10:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:52 crc kubenswrapper[4813]: I1125 10:32:52.620861 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 10:32:52 crc kubenswrapper[4813]: I1125 10:32:52.620908 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 10:32:52 crc kubenswrapper[4813]: E1125 10:32:52.621033 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 10:32:52 crc kubenswrapper[4813]: E1125 10:32:52.621181 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 10:32:52 crc kubenswrapper[4813]: I1125 10:32:52.688429 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:52 crc kubenswrapper[4813]: I1125 10:32:52.688471 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:52 crc kubenswrapper[4813]: I1125 10:32:52.688487 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:52 crc kubenswrapper[4813]: I1125 10:32:52.688503 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:52 crc kubenswrapper[4813]: I1125 10:32:52.688513 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:52Z","lastTransitionTime":"2025-11-25T10:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:52 crc kubenswrapper[4813]: I1125 10:32:52.791012 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:52 crc kubenswrapper[4813]: I1125 10:32:52.791060 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:52 crc kubenswrapper[4813]: I1125 10:32:52.791072 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:52 crc kubenswrapper[4813]: I1125 10:32:52.791088 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:52 crc kubenswrapper[4813]: I1125 10:32:52.791098 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:52Z","lastTransitionTime":"2025-11-25T10:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:52 crc kubenswrapper[4813]: I1125 10:32:52.894312 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:52 crc kubenswrapper[4813]: I1125 10:32:52.894368 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:52 crc kubenswrapper[4813]: I1125 10:32:52.894381 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:52 crc kubenswrapper[4813]: I1125 10:32:52.894400 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:52 crc kubenswrapper[4813]: I1125 10:32:52.894411 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:52Z","lastTransitionTime":"2025-11-25T10:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:52 crc kubenswrapper[4813]: I1125 10:32:52.997467 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:52 crc kubenswrapper[4813]: I1125 10:32:52.997530 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:52 crc kubenswrapper[4813]: I1125 10:32:52.997542 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:52 crc kubenswrapper[4813]: I1125 10:32:52.997560 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:52 crc kubenswrapper[4813]: I1125 10:32:52.997572 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:52Z","lastTransitionTime":"2025-11-25T10:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:53 crc kubenswrapper[4813]: I1125 10:32:53.100129 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:53 crc kubenswrapper[4813]: I1125 10:32:53.100176 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:53 crc kubenswrapper[4813]: I1125 10:32:53.100188 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:53 crc kubenswrapper[4813]: I1125 10:32:53.100204 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:53 crc kubenswrapper[4813]: I1125 10:32:53.100215 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:53Z","lastTransitionTime":"2025-11-25T10:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:53 crc kubenswrapper[4813]: I1125 10:32:53.202527 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:53 crc kubenswrapper[4813]: I1125 10:32:53.202571 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:53 crc kubenswrapper[4813]: I1125 10:32:53.202581 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:53 crc kubenswrapper[4813]: I1125 10:32:53.202595 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:53 crc kubenswrapper[4813]: I1125 10:32:53.202605 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:53Z","lastTransitionTime":"2025-11-25T10:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:53 crc kubenswrapper[4813]: I1125 10:32:53.305745 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:53 crc kubenswrapper[4813]: I1125 10:32:53.305797 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:53 crc kubenswrapper[4813]: I1125 10:32:53.305810 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:53 crc kubenswrapper[4813]: I1125 10:32:53.305830 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:53 crc kubenswrapper[4813]: I1125 10:32:53.305845 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:53Z","lastTransitionTime":"2025-11-25T10:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:53 crc kubenswrapper[4813]: I1125 10:32:53.409697 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:53 crc kubenswrapper[4813]: I1125 10:32:53.409735 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:53 crc kubenswrapper[4813]: I1125 10:32:53.409748 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:53 crc kubenswrapper[4813]: I1125 10:32:53.409763 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:53 crc kubenswrapper[4813]: I1125 10:32:53.409774 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:53Z","lastTransitionTime":"2025-11-25T10:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:53 crc kubenswrapper[4813]: I1125 10:32:53.513471 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:53 crc kubenswrapper[4813]: I1125 10:32:53.513518 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:53 crc kubenswrapper[4813]: I1125 10:32:53.513526 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:53 crc kubenswrapper[4813]: I1125 10:32:53.513542 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:53 crc kubenswrapper[4813]: I1125 10:32:53.513551 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:53Z","lastTransitionTime":"2025-11-25T10:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:53 crc kubenswrapper[4813]: I1125 10:32:53.616402 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:53 crc kubenswrapper[4813]: I1125 10:32:53.616444 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:53 crc kubenswrapper[4813]: I1125 10:32:53.616453 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:53 crc kubenswrapper[4813]: I1125 10:32:53.616467 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:53 crc kubenswrapper[4813]: I1125 10:32:53.616476 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:53Z","lastTransitionTime":"2025-11-25T10:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:53 crc kubenswrapper[4813]: I1125 10:32:53.621267 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 10:32:53 crc kubenswrapper[4813]: I1125 10:32:53.625283 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w28xl" Nov 25 10:32:53 crc kubenswrapper[4813]: E1125 10:32:53.625418 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-w28xl" podUID="74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2" Nov 25 10:32:53 crc kubenswrapper[4813]: E1125 10:32:53.625250 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 10:32:53 crc kubenswrapper[4813]: I1125 10:32:53.639804 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03303956e8d88df49c9c142a7074fa39272a78ea67e868b302d3a663d7f7178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:53Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:53 crc kubenswrapper[4813]: I1125 10:32:53.655137 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86379c39-b839-4552-949c-35431188a3a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf4d6feac8fd516ce2d5e2ec13519c2bbd2d152cffe7c434fe2c4b478e8c9a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80f2017cddd8c12997b1818074df5aa37a902dca43c4b60dda58080e1887f8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225dc69c294a0063eda858d71902e848fb59d4595c25bfeecdf8dfb60fdcd6f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cbb3888ff07d07784e188a0b7b49e0f5b421cfaeb61924a0a46094fb3795b32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e393f04b541e0fc8c686b42396605529aa65fdaaf6602dd7c64a322a5071d643\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T10:31:57Z\\\",\\\"message\\\":\\\"W1125 10:31:46.900040 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1125 10:31:46.900557 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764066706 cert, and key in /tmp/serving-cert-1749499007/serving-signer.crt, /tmp/serving-cert-1749499007/serving-signer.key\\\\nI1125 10:31:47.317086 1 observer_polling.go:159] Starting file observer\\\\nW1125 10:31:47.321027 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 10:31:47.321219 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 10:31:47.325062 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1749499007/tls.crt::/tmp/serving-cert-1749499007/tls.key\\\\\\\"\\\\nF1125 10:31:57.761534 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46e1b456988c700012c86fac792b65d2e7c9a049057d5a17efbf600418191910\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3af1cb4a9a556e116874a9b7f7cb75a99db83b991b65280b6b2a96233ca0a85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://f3af1cb4a9a556e116874a9b7f7cb75a99db83b991b65280b6b2a96233ca0a85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:31:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:53Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:53 crc kubenswrapper[4813]: I1125 10:32:53.666523 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mmh87" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bcb41f8-67f5-4a87-8b49-07da054e0c81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fbf69eb2f0afb160e40675e9a17e8a9798a3f02de6a2f3aae7a30ef989e5479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xtc7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mmh87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:53Z is after 
2025-08-24T17:21:41Z" Nov 25 10:32:53 crc kubenswrapper[4813]: I1125 10:32:53.677397 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ece7e9c-d49a-4348-98ec-bd6ab589f750\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b85e2f2d2a870b205f19402a20540fa67104d12d2fcd412ada24c78b0602f2ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j55j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c16599a2b18976267f55176085b4b11e3e253e308707081d06d28d64f4dbb627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j55j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-knhz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:53Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:53 crc kubenswrapper[4813]: I1125 10:32:53.689585 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7391b3f2-dce9-4286-b622-7e7202a042c0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b823e81d1130cdb4373ba0b3d00a5f2d0717e34dcf36d2172550263b44e953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fa62598abd071ec69894326a022e35c2b383a5d5a1b893b0ecc1e30b8b775ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21dd198f1963287a0866dc0aa9d9854472f833cac0d0146a142a370e236b09f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\
\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9ab19e784bbd45e4f4c23288211674ac0d0affbe2736d338967e9237d672760\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9ab19e784bbd45e4f4c23288211674ac0d0affbe2736d338967e9237d672760\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:31:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:53Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:53 crc kubenswrapper[4813]: I1125 10:32:53.703990 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00ebb057ca6152197fa76fc78787533ab8ddaa1e1a096c624e3efc5fcf091332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616fae5157b8d51f903f870d19e7ed40447c3eb954b0e1bd0b3323c27deb59f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:53Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:53 crc kubenswrapper[4813]: I1125 10:32:53.718011 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:53 crc kubenswrapper[4813]: I1125 10:32:53.718007 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adac7b8b6297f077adc2d0e402547d19845a4b66a1279e143ba89f014ccdbf15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:53Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:53 crc kubenswrapper[4813]: I1125 10:32:53.718049 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:53 crc kubenswrapper[4813]: I1125 10:32:53.718410 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:53 crc kubenswrapper[4813]: I1125 10:32:53.718449 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:53 crc kubenswrapper[4813]: I1125 10:32:53.718463 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:53Z","lastTransitionTime":"2025-11-25T10:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:53 crc kubenswrapper[4813]: I1125 10:32:53.733899 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rlpbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98439068-3c89-4c1b-8bb8-8aa848ef0cd3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73be3b0cabd20c94bd5c69211038398effe8adbb93eda17dbb136f17fa5ba62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdxm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rlpbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:53Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:53 crc kubenswrapper[4813]: I1125 10:32:53.745496 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qltmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7637b907-3ae7-4b15-a4b9-a0c2217384a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://713975d4e8de4e14484cbd711f5279ddce3acad00571bf052b0ed728bd1a0ccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qvsb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qltmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:53Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:53 crc kubenswrapper[4813]: I1125 10:32:53.757423 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sbzfj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eccc6bcf-65c9-4741-a1d7-e5545661d3d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf35ea2947d355207c657bf7ef54d855cead727db293543efaa653bb03718f6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8s86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75f58510a2e937f933fadfec014e5ddff8e6cea4df17e8ade67f4c7af9be7104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8s86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sbzfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:53Z is after 2025-08-24T17:21:41Z" Nov 25 
10:32:53 crc kubenswrapper[4813]: I1125 10:32:53.778543 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8460ec76-ba89-4f8f-9055-d7274ab52d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0292e263e2315d5f0352fb15d9e84e89f103c0b8e3371db2a611b001c5a3fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ab3178c217051fe9026c77a963c194bed57ec0fb9521678f41c7c16235ca789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee35613ff013fdd9f9ba4aa81006a9
9cd328ab65010b9b337815829bfcc88937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1581fa41d3a426258f7c464d5e0f2ad431917ccec0616d26bb8b0affa320c90e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c4c4032f6080041e0b54686cb2c9981d2578e7a2bd02bcc1cf008c8fa3bfb6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7324d51c21107fadbd2f170e16f3cc20fc473ca9b7b1bbe0fc5e64378bd6ab7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f47ead7e465395c7960e5ab292e2f2869ed1630436f2739b1e0420f217a96cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f47ead7e465395c7960e5ab292e2f2869ed1630436f2739b1e0420f217a96cf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T10:32:36Z\\\",\\\"message\\\":\\\"[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {8efa4d1a-72f5-4dfa-9bc2-9d93ef11ecf2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1125 10:32:36.408644 6489 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:36Z is after 2025-08-24T17:21:41Z]\\\\nI1125 10:32:36.408623 6489 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver/check-endpoints]} name:Service_openshift-apiserver/check-endpoints_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8s5k7_openshift-ovn-kubernetes(8460ec76-ba89-4f8f-9055-d7274ab52d11)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32898e756d7697bcb5b6ae6780b7b752be67b44b9ce8c2f2459477c7f0b0a28d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6554bcb1ce7e97de39f99556fc4e3db63a583ea45bd87706a3c7737a8bde4f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6554bcb1ce7e97de39f99556fc4e3db63a583ea45bd87706a3c7737a8bde4f5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8s5k7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:53Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:53 crc kubenswrapper[4813]: I1125 10:32:53.791921 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-w28xl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4dw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4dw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-w28xl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:53Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:53 crc kubenswrapper[4813]: I1125 10:32:53.804443 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"061a2a52-878f-4543-8408-3a7b838f8881\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761ff3f6b4afa8edd4892d9fe727e977fb9700a8c7ab1c149c12bfa6431951c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf09669b247e0daa0787d296aa833570e1a542082a7a698bb499dc34f16fa4be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e593ff2a6412d8dfd3cd96e456f4fe9e2f8b04302d5b9036b828a3cf480b573\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11e2aa9eaa941ade1982256194422becbe3f375508cd507f603a822b10e03134\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:53Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:53 crc kubenswrapper[4813]: I1125 10:32:53.819026 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:53Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:53 crc kubenswrapper[4813]: I1125 10:32:53.821135 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:53 crc kubenswrapper[4813]: I1125 10:32:53.821197 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:53 crc kubenswrapper[4813]: I1125 10:32:53.821210 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:53 crc kubenswrapper[4813]: I1125 10:32:53.821231 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:53 crc kubenswrapper[4813]: I1125 10:32:53.821245 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:53Z","lastTransitionTime":"2025-11-25T10:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:53 crc kubenswrapper[4813]: I1125 10:32:53.831144 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:53Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:53 crc kubenswrapper[4813]: I1125 10:32:53.844635 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:53Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:53 crc kubenswrapper[4813]: I1125 10:32:53.859634 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4s9w7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2ac9045-f02f-4149-afa5-61da1452d547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbdce0d7869276078c48cf3c335c37ec3c8f324e76db30e312485508977ed8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://792d5ec80cac3667bf3ad534b473ae86eca391f49782cfc0938d789eefd24a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://792d5ec80cac3667bf3ad534b473ae86eca391f49782cfc0938d789eefd24a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2afd11e5128cad91161f49b1e5d6ac378dbd319773996dbe702bf678a45a4a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2afd11e5128cad91161f49b1e5d6ac378dbd319773996dbe702bf678a45a4a91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00af788f1e52f5e8adb3f20e61f5fbcfd1090e97a1f24d4ebe926dad23155ae5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00af788f1e52f5e8adb3f20e61f5fbcfd1090e97a1f24d4ebe926dad23155ae5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://156bff53f3008351c3f76a0cc5e9c3eeb4f19a7201392d095bc62012791d9fa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://156bff53f3008351c3f76a0cc5e9c3eeb4f19a7201392d095bc62012791d9fa5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a98899b475454bf9249b6437439cb15a56278a71678cd2c7a430b4c14ef4022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a98899b475454bf9249b6437439cb15a56278a71678cd2c7a430b4c14ef4022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://345ac26e481961ce51e21644b04d31cd5a82c981e9a2355ddd863036cabb4a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://345ac26e481961ce51e21644b04d31cd5a82c981e9a2355ddd863036cabb4a4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4s9w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:53Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:53 crc kubenswrapper[4813]: I1125 10:32:53.923164 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:53 crc kubenswrapper[4813]: I1125 10:32:53.923286 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:53 crc 
kubenswrapper[4813]: I1125 10:32:53.923390 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:53 crc kubenswrapper[4813]: I1125 10:32:53.923625 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:53 crc kubenswrapper[4813]: I1125 10:32:53.924233 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:53Z","lastTransitionTime":"2025-11-25T10:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:54 crc kubenswrapper[4813]: I1125 10:32:54.027323 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:54 crc kubenswrapper[4813]: I1125 10:32:54.027370 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:54 crc kubenswrapper[4813]: I1125 10:32:54.027382 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:54 crc kubenswrapper[4813]: I1125 10:32:54.027397 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:54 crc kubenswrapper[4813]: I1125 10:32:54.027407 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:54Z","lastTransitionTime":"2025-11-25T10:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:54 crc kubenswrapper[4813]: I1125 10:32:54.130397 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:54 crc kubenswrapper[4813]: I1125 10:32:54.130437 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:54 crc kubenswrapper[4813]: I1125 10:32:54.130445 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:54 crc kubenswrapper[4813]: I1125 10:32:54.130460 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:54 crc kubenswrapper[4813]: I1125 10:32:54.130470 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:54Z","lastTransitionTime":"2025-11-25T10:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:54 crc kubenswrapper[4813]: I1125 10:32:54.233603 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:54 crc kubenswrapper[4813]: I1125 10:32:54.233895 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:54 crc kubenswrapper[4813]: I1125 10:32:54.234000 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:54 crc kubenswrapper[4813]: I1125 10:32:54.234112 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:54 crc kubenswrapper[4813]: I1125 10:32:54.234200 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:54Z","lastTransitionTime":"2025-11-25T10:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:54 crc kubenswrapper[4813]: I1125 10:32:54.337414 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:54 crc kubenswrapper[4813]: I1125 10:32:54.337459 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:54 crc kubenswrapper[4813]: I1125 10:32:54.337474 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:54 crc kubenswrapper[4813]: I1125 10:32:54.337491 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:54 crc kubenswrapper[4813]: I1125 10:32:54.337503 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:54Z","lastTransitionTime":"2025-11-25T10:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:54 crc kubenswrapper[4813]: I1125 10:32:54.439348 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:54 crc kubenswrapper[4813]: I1125 10:32:54.439397 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:54 crc kubenswrapper[4813]: I1125 10:32:54.439410 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:54 crc kubenswrapper[4813]: I1125 10:32:54.439428 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:54 crc kubenswrapper[4813]: I1125 10:32:54.439465 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:54Z","lastTransitionTime":"2025-11-25T10:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:54 crc kubenswrapper[4813]: I1125 10:32:54.542720 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:54 crc kubenswrapper[4813]: I1125 10:32:54.542778 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:54 crc kubenswrapper[4813]: I1125 10:32:54.542796 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:54 crc kubenswrapper[4813]: I1125 10:32:54.542820 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:54 crc kubenswrapper[4813]: I1125 10:32:54.542837 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:54Z","lastTransitionTime":"2025-11-25T10:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:54 crc kubenswrapper[4813]: I1125 10:32:54.621349 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 10:32:54 crc kubenswrapper[4813]: E1125 10:32:54.621481 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 10:32:54 crc kubenswrapper[4813]: I1125 10:32:54.621357 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 10:32:54 crc kubenswrapper[4813]: E1125 10:32:54.621555 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 10:32:54 crc kubenswrapper[4813]: I1125 10:32:54.645971 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:54 crc kubenswrapper[4813]: I1125 10:32:54.646020 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:54 crc kubenswrapper[4813]: I1125 10:32:54.646033 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:54 crc kubenswrapper[4813]: I1125 10:32:54.646052 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:54 crc kubenswrapper[4813]: I1125 10:32:54.646064 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:54Z","lastTransitionTime":"2025-11-25T10:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:54 crc kubenswrapper[4813]: I1125 10:32:54.748938 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:54 crc kubenswrapper[4813]: I1125 10:32:54.748994 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:54 crc kubenswrapper[4813]: I1125 10:32:54.749006 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:54 crc kubenswrapper[4813]: I1125 10:32:54.749023 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:54 crc kubenswrapper[4813]: I1125 10:32:54.749037 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:54Z","lastTransitionTime":"2025-11-25T10:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:54 crc kubenswrapper[4813]: I1125 10:32:54.851404 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:54 crc kubenswrapper[4813]: I1125 10:32:54.851460 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:54 crc kubenswrapper[4813]: I1125 10:32:54.851472 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:54 crc kubenswrapper[4813]: I1125 10:32:54.851490 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:54 crc kubenswrapper[4813]: I1125 10:32:54.851505 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:54Z","lastTransitionTime":"2025-11-25T10:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:54 crc kubenswrapper[4813]: I1125 10:32:54.953828 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:54 crc kubenswrapper[4813]: I1125 10:32:54.953861 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:54 crc kubenswrapper[4813]: I1125 10:32:54.953869 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:54 crc kubenswrapper[4813]: I1125 10:32:54.953882 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:54 crc kubenswrapper[4813]: I1125 10:32:54.953891 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:54Z","lastTransitionTime":"2025-11-25T10:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:55 crc kubenswrapper[4813]: I1125 10:32:55.056375 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:55 crc kubenswrapper[4813]: I1125 10:32:55.056411 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:55 crc kubenswrapper[4813]: I1125 10:32:55.056422 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:55 crc kubenswrapper[4813]: I1125 10:32:55.056437 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:55 crc kubenswrapper[4813]: I1125 10:32:55.056447 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:55Z","lastTransitionTime":"2025-11-25T10:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:55 crc kubenswrapper[4813]: I1125 10:32:55.160426 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:55 crc kubenswrapper[4813]: I1125 10:32:55.160494 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:55 crc kubenswrapper[4813]: I1125 10:32:55.160512 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:55 crc kubenswrapper[4813]: I1125 10:32:55.160538 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:55 crc kubenswrapper[4813]: I1125 10:32:55.160552 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:55Z","lastTransitionTime":"2025-11-25T10:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:55 crc kubenswrapper[4813]: I1125 10:32:55.263176 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:55 crc kubenswrapper[4813]: I1125 10:32:55.263213 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:55 crc kubenswrapper[4813]: I1125 10:32:55.263224 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:55 crc kubenswrapper[4813]: I1125 10:32:55.263240 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:55 crc kubenswrapper[4813]: I1125 10:32:55.263251 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:55Z","lastTransitionTime":"2025-11-25T10:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:55 crc kubenswrapper[4813]: I1125 10:32:55.365174 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:55 crc kubenswrapper[4813]: I1125 10:32:55.365206 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:55 crc kubenswrapper[4813]: I1125 10:32:55.365213 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:55 crc kubenswrapper[4813]: I1125 10:32:55.365226 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:55 crc kubenswrapper[4813]: I1125 10:32:55.365236 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:55Z","lastTransitionTime":"2025-11-25T10:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:55 crc kubenswrapper[4813]: I1125 10:32:55.467872 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:55 crc kubenswrapper[4813]: I1125 10:32:55.467912 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:55 crc kubenswrapper[4813]: I1125 10:32:55.467924 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:55 crc kubenswrapper[4813]: I1125 10:32:55.467964 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:55 crc kubenswrapper[4813]: I1125 10:32:55.467999 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:55Z","lastTransitionTime":"2025-11-25T10:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:55 crc kubenswrapper[4813]: I1125 10:32:55.570036 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:55 crc kubenswrapper[4813]: I1125 10:32:55.570089 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:55 crc kubenswrapper[4813]: I1125 10:32:55.570104 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:55 crc kubenswrapper[4813]: I1125 10:32:55.570123 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:55 crc kubenswrapper[4813]: I1125 10:32:55.570136 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:55Z","lastTransitionTime":"2025-11-25T10:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:55 crc kubenswrapper[4813]: I1125 10:32:55.620815 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w28xl" Nov 25 10:32:55 crc kubenswrapper[4813]: E1125 10:32:55.620972 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w28xl" podUID="74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2" Nov 25 10:32:55 crc kubenswrapper[4813]: I1125 10:32:55.621828 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 10:32:55 crc kubenswrapper[4813]: E1125 10:32:55.621952 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 10:32:55 crc kubenswrapper[4813]: I1125 10:32:55.672984 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:55 crc kubenswrapper[4813]: I1125 10:32:55.673071 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:55 crc kubenswrapper[4813]: I1125 10:32:55.673093 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:55 crc kubenswrapper[4813]: I1125 10:32:55.673111 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:55 crc kubenswrapper[4813]: I1125 10:32:55.673125 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:55Z","lastTransitionTime":"2025-11-25T10:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:55 crc kubenswrapper[4813]: I1125 10:32:55.775510 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:55 crc kubenswrapper[4813]: I1125 10:32:55.775556 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:55 crc kubenswrapper[4813]: I1125 10:32:55.775573 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:55 crc kubenswrapper[4813]: I1125 10:32:55.775604 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:55 crc kubenswrapper[4813]: I1125 10:32:55.775618 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:55Z","lastTransitionTime":"2025-11-25T10:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:55 crc kubenswrapper[4813]: I1125 10:32:55.877812 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:55 crc kubenswrapper[4813]: I1125 10:32:55.877869 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:55 crc kubenswrapper[4813]: I1125 10:32:55.877885 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:55 crc kubenswrapper[4813]: I1125 10:32:55.878265 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:55 crc kubenswrapper[4813]: I1125 10:32:55.878316 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:55Z","lastTransitionTime":"2025-11-25T10:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:55 crc kubenswrapper[4813]: I1125 10:32:55.980343 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rlpbx_98439068-3c89-4c1b-8bb8-8aa848ef0cd3/kube-multus/0.log" Nov 25 10:32:55 crc kubenswrapper[4813]: I1125 10:32:55.980434 4813 generic.go:334] "Generic (PLEG): container finished" podID="98439068-3c89-4c1b-8bb8-8aa848ef0cd3" containerID="73be3b0cabd20c94bd5c69211038398effe8adbb93eda17dbb136f17fa5ba62e" exitCode=1 Nov 25 10:32:55 crc kubenswrapper[4813]: I1125 10:32:55.980497 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rlpbx" event={"ID":"98439068-3c89-4c1b-8bb8-8aa848ef0cd3","Type":"ContainerDied","Data":"73be3b0cabd20c94bd5c69211038398effe8adbb93eda17dbb136f17fa5ba62e"} Nov 25 10:32:55 crc kubenswrapper[4813]: I1125 10:32:55.980733 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:55 crc kubenswrapper[4813]: I1125 10:32:55.980766 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:55 crc kubenswrapper[4813]: I1125 10:32:55.980774 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:55 crc kubenswrapper[4813]: I1125 10:32:55.980788 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:55 crc kubenswrapper[4813]: I1125 10:32:55.980798 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:55Z","lastTransitionTime":"2025-11-25T10:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:55 crc kubenswrapper[4813]: I1125 10:32:55.981242 4813 scope.go:117] "RemoveContainer" containerID="73be3b0cabd20c94bd5c69211038398effe8adbb93eda17dbb136f17fa5ba62e" Nov 25 10:32:56 crc kubenswrapper[4813]: I1125 10:32:56.001338 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86379c39-b839-4552-949c-35431188a3a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf4d6feac8fd516ce2d5e2ec13519c2bbd2d152cffe7c434fe2c4b478e8c9a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80f2017cddd8c12997b1818074df5aa37a902dca43c4b60dda58080e1887f8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225dc69c294a0063eda858d71902e848fb59d4595c25bfeecdf8dfb60fdcd6f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cbb3888ff07d07784e188a0b7b49e0f5b421cfaeb61924a0a46094fb3795b32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e393f04b541e0fc8c686b42396605529aa65fdaaf6602dd7c64a322a5071d643\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T10:31:57Z\\\",\\\"message\\\":\\\"W1125 10:31:46.900040 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1125 10:31:46.900557 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764066706 cert, and key in /tmp/serving-cert-1749499007/serving-signer.crt, /tmp/serving-cert-1749499007/serving-signer.key\\\\nI1125 10:31:47.317086 1 observer_polling.go:159] Starting file observer\\\\nW1125 10:31:47.321027 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 10:31:47.321219 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 10:31:47.325062 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1749499007/tls.crt::/tmp/serving-cert-1749499007/tls.key\\\\\\\"\\\\nF1125 10:31:57.761534 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46e1b456988c700012c86fac792b65d2e7c9a049057d5a17efbf600418191910\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3af1cb4a9a556e116874a9b7f7cb75a99db83b991b65280b6b2a96233ca0a85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3af1cb4a9a556e116874a9b7f7cb75a99db83b991b65280b6b2a96233ca0a85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:31:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:55Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:56 crc kubenswrapper[4813]: I1125 10:32:56.015796 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mmh87" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bcb41f8-67f5-4a87-8b49-07da054e0c81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fbf69eb2f0afb160e40675e9a17e8a9798a3f02de6a2f3aae7a30ef989e5479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xtc7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mmh87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:56Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:56 crc kubenswrapper[4813]: I1125 10:32:56.030264 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ece7e9c-d49a-4348-98ec-bd6ab589f750\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b85e2f2d2a870b205f19402a20540fa67104d12d2fcd412ada24c78b0602f2ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j55j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c16599a2b18976267f55176085b4b11e3e253e308707081d06d28d64f4dbb627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j55j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-knhz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:56Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:56 crc kubenswrapper[4813]: I1125 10:32:56.042050 4813 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-qltmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7637b907-3ae7-4b15-a4b9-a0c2217384a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://713975d4e8de4e14484cbd711f5279ddce3acad00571bf052b0ed728bd1a0ccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qvsb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qltmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:56Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:56 crc kubenswrapper[4813]: I1125 10:32:56.054256 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sbzfj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eccc6bcf-65c9-4741-a1d7-e5545661d3d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf35ea2947d355207c657bf7ef54d855cead727db293543efaa653bb03718f6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8s86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75f58510a2e937f933fadfec014e5ddff8e6cea4df17e8ade67f4c7af9be7104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8s86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sbzfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:56Z is after 2025-08-24T17:21:41Z" Nov 25 
10:32:56 crc kubenswrapper[4813]: I1125 10:32:56.066454 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7391b3f2-dce9-4286-b622-7e7202a042c0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b823e81d1130cdb4373ba0b3d00a5f2d0717e34dcf36d2172550263b44e953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fa62598abd071ec69894326a022e35c2b383a5d5a1b893b0ecc1e30b8b775ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21dd198f1963287a0866dc0aa9d9854472f833cac0d0146a142a370e236b09f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9ab19e784bbd45e4f4c23288211674ac0d0affbe2736d338967e9237d672760\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9ab19e784bbd45e4f4c23288211674ac0d0affbe2736d338967e9237d672760\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:31:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:56Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:56 crc kubenswrapper[4813]: I1125 10:32:56.080513 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00ebb057ca6152197fa76fc78787533ab8ddaa1e1a096c624e3efc5fcf091332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616fae5157b8d51f903f870d19e7ed40447c3eb954b0e1bd0b3323c27deb59f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\
\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:56Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:56 crc kubenswrapper[4813]: I1125 10:32:56.082982 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:56 crc kubenswrapper[4813]: I1125 10:32:56.083006 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:56 crc kubenswrapper[4813]: I1125 10:32:56.083388 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:56 crc kubenswrapper[4813]: I1125 10:32:56.083406 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:56 crc kubenswrapper[4813]: I1125 10:32:56.083419 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:56Z","lastTransitionTime":"2025-11-25T10:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:56 crc kubenswrapper[4813]: I1125 10:32:56.093733 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adac7b8b6297f077adc2d0e402547d19845a4b66a1279e143ba89f014ccdbf15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:56Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:56 crc kubenswrapper[4813]: I1125 10:32:56.104740 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rlpbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98439068-3c89-4c1b-8bb8-8aa848ef0cd3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:55Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73be3b0cabd20c94bd5c69211038398effe8adbb93eda17dbb136f17fa5ba62e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73be3b0cabd20c94bd5c69211038398effe8adbb93eda17dbb136f17fa5ba62e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T10:32:55Z\\\",\\\"message\\\":\\\"2025-11-25T10:32:09+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_39ca8145-fa4d-4ac0-ba01-62afbe2deb27\\\\n2025-11-25T10:32:09+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_39ca8145-fa4d-4ac0-ba01-62afbe2deb27 to /host/opt/cni/bin/\\\\n2025-11-25T10:32:10Z [verbose] multus-daemon started\\\\n2025-11-25T10:32:10Z [verbose] Readiness Indicator file check\\\\n2025-11-25T10:32:55Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdxm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for 
pod \"openshift-multus\"/\"multus-rlpbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:56Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:56 crc kubenswrapper[4813]: I1125 10:32:56.120786 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:56Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:56 crc kubenswrapper[4813]: I1125 10:32:56.135862 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4s9w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2ac9045-f02f-4149-afa5-61da1452d547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbdce0d7869276078c48cf3c335c37ec3c8f324e76db30e312485508977ed8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://792d5ec80cac3667bf3ad534b473ae86eca391f49782cfc0938d789eefd24a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://792d5ec80cac3667bf3ad534b473ae86eca391f49782cfc0938d789eefd24a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2afd11e5128cad91161f49b1e5d6ac378dbd319773996dbe702bf678a45a4a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2afd11e5128cad91161f49b1e5d6ac378dbd319773996dbe702bf678a45a4a91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00af788f1e52f5e8adb3f20e61f5fbcfd1090e97a1f24d4ebe926dad23155ae5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00af788f1e52f5e8adb3f20e61f5fbcfd1090e97a1f24d4ebe926dad23155ae5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://156bff53f3008351c3f76a0cc5e9c3eeb4f19a7201392d095bc62012791d9fa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://156bff53f3008351c3f76a0cc5e9c3eeb4f19a7201392d095bc62012791d9fa5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a98899b475454bf9249b6437439cb15a56278a71678cd2c7a430b4c14ef4022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a98899b475454bf9249b6437439cb15a56278a71678cd2c7a430b4c14ef4022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://345ac26e481961ce51e21644b04d31cd5a82c981e9a2355ddd863036cabb4a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://345ac26e481961ce51e21644b04d31cd5a82c981e9a2355ddd863036cabb4a4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4s9w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:56Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:56 crc kubenswrapper[4813]: I1125 10:32:56.158733 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8460ec76-ba89-4f8f-9055-d7274ab52d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0292e263e2315d5f0352fb15d9e84e89f103c0b8e3371db2a611b001c5a3fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ab3178c217051fe9026c77a963c194bed57ec0fb9521678f41c7c16235ca789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee35613ff013fdd9f9ba4aa81006a99cd328ab65010b9b337815829bfcc88937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1581fa41d3a426258f7c464d5e0f2ad431917ccec0616d26bb8b0affa320c90e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c4c4032f6080041e0b54686cb2c9981d2578e7a2bd02bcc1cf008c8fa3bfb6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7324d51c21107fadbd2f170e16f3cc20fc473ca9b7b1bbe0fc5e64378bd6ab7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f47ead7e465395c7960e5ab292e2f2869ed1630
436f2739b1e0420f217a96cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f47ead7e465395c7960e5ab292e2f2869ed1630436f2739b1e0420f217a96cf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T10:32:36Z\\\",\\\"message\\\":\\\"[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {8efa4d1a-72f5-4dfa-9bc2-9d93ef11ecf2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1125 10:32:36.408644 6489 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:36Z is after 2025-08-24T17:21:41Z]\\\\nI1125 10:32:36.408623 6489 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver/check-endpoints]} name:Service_openshift-apiserver/check-endpoints_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8s5k7_openshift-ovn-kubernetes(8460ec76-ba89-4f8f-9055-d7274ab52d11)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32898e756d7697bcb5b6ae6780b7b752be67b44b9ce8c2f2459477c7f0b0a28d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6554bcb1ce7e97de39f99556fc4e3db63a583ea45bd87706a3c7737a8bde4f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6554bcb1ce7e97de39f99556fc4e3db63a583ea45bd87706a3c7737a8bde4f5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8s5k7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:56Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:56 crc kubenswrapper[4813]: I1125 10:32:56.170856 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-w28xl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4dw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4dw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-w28xl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:56Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:56 crc kubenswrapper[4813]: I1125 10:32:56.184736 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"061a2a52-878f-4543-8408-3a7b838f8881\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761ff3f6b4afa8edd4892d9fe727e977fb9700a8c7ab1c149c12bfa6431951c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf09669b247e0daa0787d296aa833570e1a542082a7a698bb499dc34f16fa4be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e593ff2a6412d8dfd3cd96e456f4fe9e2f8b04302d5b9036b828a3cf480b573\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11e2aa9eaa941ade1982256194422becbe3f375508cd507f603a822b10e03134\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:56Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:56 crc kubenswrapper[4813]: I1125 10:32:56.185769 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:56 crc kubenswrapper[4813]: I1125 10:32:56.185806 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:56 crc kubenswrapper[4813]: I1125 10:32:56.185824 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:56 crc kubenswrapper[4813]: I1125 10:32:56.185840 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:56 crc kubenswrapper[4813]: I1125 10:32:56.185851 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:56Z","lastTransitionTime":"2025-11-25T10:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:56 crc kubenswrapper[4813]: I1125 10:32:56.200017 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:56Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:56 crc kubenswrapper[4813]: I1125 10:32:56.213967 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:56Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:56 crc kubenswrapper[4813]: I1125 10:32:56.226281 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03303956e8d88df49c9c142a7074fa39272a78ea67e868b302d3a663d7f7178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:56Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:56 crc kubenswrapper[4813]: I1125 10:32:56.287923 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:56 crc kubenswrapper[4813]: I1125 10:32:56.287957 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:56 crc kubenswrapper[4813]: I1125 10:32:56.287968 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:56 crc kubenswrapper[4813]: I1125 10:32:56.287984 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:56 crc kubenswrapper[4813]: I1125 10:32:56.287995 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:56Z","lastTransitionTime":"2025-11-25T10:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:56 crc kubenswrapper[4813]: I1125 10:32:56.390804 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:56 crc kubenswrapper[4813]: I1125 10:32:56.390849 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:56 crc kubenswrapper[4813]: I1125 10:32:56.390861 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:56 crc kubenswrapper[4813]: I1125 10:32:56.390877 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:56 crc kubenswrapper[4813]: I1125 10:32:56.390889 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:56Z","lastTransitionTime":"2025-11-25T10:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:56 crc kubenswrapper[4813]: I1125 10:32:56.493082 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:56 crc kubenswrapper[4813]: I1125 10:32:56.493133 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:56 crc kubenswrapper[4813]: I1125 10:32:56.493147 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:56 crc kubenswrapper[4813]: I1125 10:32:56.493164 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:56 crc kubenswrapper[4813]: I1125 10:32:56.493176 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:56Z","lastTransitionTime":"2025-11-25T10:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:56 crc kubenswrapper[4813]: I1125 10:32:56.595806 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:56 crc kubenswrapper[4813]: I1125 10:32:56.595859 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:56 crc kubenswrapper[4813]: I1125 10:32:56.595878 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:56 crc kubenswrapper[4813]: I1125 10:32:56.595901 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:56 crc kubenswrapper[4813]: I1125 10:32:56.595917 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:56Z","lastTransitionTime":"2025-11-25T10:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:56 crc kubenswrapper[4813]: I1125 10:32:56.620892 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 10:32:56 crc kubenswrapper[4813]: E1125 10:32:56.621092 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 10:32:56 crc kubenswrapper[4813]: I1125 10:32:56.620904 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 10:32:56 crc kubenswrapper[4813]: E1125 10:32:56.621490 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 10:32:56 crc kubenswrapper[4813]: I1125 10:32:56.699054 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:56 crc kubenswrapper[4813]: I1125 10:32:56.699295 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:56 crc kubenswrapper[4813]: I1125 10:32:56.699407 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:56 crc kubenswrapper[4813]: I1125 10:32:56.699490 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:56 crc kubenswrapper[4813]: I1125 10:32:56.699553 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:56Z","lastTransitionTime":"2025-11-25T10:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:56 crc kubenswrapper[4813]: I1125 10:32:56.801809 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:56 crc kubenswrapper[4813]: I1125 10:32:56.802129 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:56 crc kubenswrapper[4813]: I1125 10:32:56.802247 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:56 crc kubenswrapper[4813]: I1125 10:32:56.802415 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:56 crc kubenswrapper[4813]: I1125 10:32:56.802532 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:56Z","lastTransitionTime":"2025-11-25T10:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:56 crc kubenswrapper[4813]: I1125 10:32:56.905915 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:56 crc kubenswrapper[4813]: I1125 10:32:56.905976 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:56 crc kubenswrapper[4813]: I1125 10:32:56.905994 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:56 crc kubenswrapper[4813]: I1125 10:32:56.906018 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:56 crc kubenswrapper[4813]: I1125 10:32:56.906037 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:56Z","lastTransitionTime":"2025-11-25T10:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:56 crc kubenswrapper[4813]: I1125 10:32:56.988613 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rlpbx_98439068-3c89-4c1b-8bb8-8aa848ef0cd3/kube-multus/0.log" Nov 25 10:32:56 crc kubenswrapper[4813]: I1125 10:32:56.988711 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rlpbx" event={"ID":"98439068-3c89-4c1b-8bb8-8aa848ef0cd3","Type":"ContainerStarted","Data":"e45d1cfd847d1fbd71b9790ea8725a76ffc6117b372d227e921dad0143f7b30c"} Nov 25 10:32:57 crc kubenswrapper[4813]: I1125 10:32:57.007952 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:57 crc kubenswrapper[4813]: I1125 10:32:57.007991 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:57 crc kubenswrapper[4813]: I1125 10:32:57.007999 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:57 crc kubenswrapper[4813]: I1125 10:32:57.008013 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:57 crc kubenswrapper[4813]: I1125 10:32:57.008022 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:57Z","lastTransitionTime":"2025-11-25T10:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:57 crc kubenswrapper[4813]: I1125 10:32:57.010096 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86379c39-b839-4552-949c-35431188a3a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf4d6feac8fd516ce2d5e2ec13519c2bbd2d152cffe7c434fe2c4b478e8c9a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80f2017cddd8c12997b1818074df5aa37a902dca43c4b60dda58080e1887f8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225dc69c294a0063eda858d71902e848fb59d4595c25bfeecdf8dfb60fdcd6f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cbb3888ff07d07784e188a0b7b49e0f5b421cfaeb61924a0a46094fb3795b32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e393f04b541e0fc8c686b42396605529aa65fdaaf6602dd7c64a322a5071d643\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T10:31:57Z\\\",\\\"message\\\":\\\"W1125 10:31:46.900040 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1125 10:31:46.900557 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764066706 cert, and key in /tmp/serving-cert-1749499007/serving-signer.crt, /tmp/serving-cert-1749499007/serving-signer.key\\\\nI1125 10:31:47.317086 1 observer_polling.go:159] Starting file observer\\\\nW1125 10:31:47.321027 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 10:31:47.321219 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 10:31:47.325062 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1749499007/tls.crt::/tmp/serving-cert-1749499007/tls.key\\\\\\\"\\\\nF1125 10:31:57.761534 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46e1b456988c700012c86fac792b65d2e7c9a049057d5a17efbf600418191910\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3af1cb4a9a556e116874a9b7f7cb75a99db83b991b65280b6b2a96233ca0a85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3af1cb4a9a556e116874a9b7f7cb75a99db83b991b65280b6b2a96233ca0a85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:31:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:57Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:57 crc kubenswrapper[4813]: I1125 10:32:57.023986 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mmh87" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bcb41f8-67f5-4a87-8b49-07da054e0c81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fbf69eb2f0afb160e40675e9a17e8a9798a3f02de6a2f3aae7a30ef989e5479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xtc7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mmh87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:57Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:57 crc kubenswrapper[4813]: I1125 10:32:57.036508 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ece7e9c-d49a-4348-98ec-bd6ab589f750\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b85e2f2d2a870b205f19402a20540fa67104d12d2fcd412ada24c78b0602f2ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j55j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c16599a2b18976267f55176085b4b11e3e253e308707081d06d28d64f4dbb627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j55j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-knhz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:57Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:57 crc kubenswrapper[4813]: I1125 10:32:57.049009 4813 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7391b3f2-dce9-4286-b622-7e7202a042c0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b823e81d1130cdb4373ba0b3d00a5f2d0717e34dcf36d2172550263b44e953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fa62598abd071ec69894326a022e35c2b383a5d5a1b893b0ecc1e30b8b775ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21dd198f1963287a0866dc0aa9d9854472f833cac0d0146a142a370e236b09f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://c9ab19e784bbd45e4f4c23288211674ac0d0affbe2736d338967e9237d672760\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9ab19e784bbd45e4f4c23288211674ac0d0affbe2736d338967e9237d672760\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:31:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:57Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:57 crc kubenswrapper[4813]: I1125 10:32:57.063036 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00ebb057ca6152197fa76fc78787533ab8ddaa1e1a096c624e3efc5fcf091332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616fae5157b8d51f903f870d19e7ed40447c3eb954b0e1bd0b3323c27deb59f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209948
2919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:57Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:57 crc kubenswrapper[4813]: I1125 10:32:57.074742 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adac7b8b6297f077adc2d0e402547d19845a4b66a1279e143ba89f014ccdbf15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:57Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:57 crc kubenswrapper[4813]: I1125 10:32:57.086605 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rlpbx" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98439068-3c89-4c1b-8bb8-8aa848ef0cd3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e45d1cfd847d1fbd71b9790ea8725a76ffc6117b372d227e921dad0143f7b30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73be3b0cabd20c94bd5c69211038398effe8adbb93eda17dbb136f17fa5ba62e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T10:32:55Z\\\",\\\"message\\\":\\\"2025-11-25T10:32:09+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_39ca8145-fa4d-4ac0-ba01-62afbe2deb27\\\\n2025-11-25T10:32:09+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_39ca8145-fa4d-4ac0-ba01-62afbe2deb27 to /host/opt/cni/bin/\\\\n2025-11-25T10:32:10Z [verbose] multus-daemon started\\\\n2025-11-25T10:32:10Z [verbose] Readiness Indicator file check\\\\n2025-11-25T10:32:55Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdxm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rlpbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:57Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:57 crc kubenswrapper[4813]: I1125 10:32:57.097238 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qltmc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7637b907-3ae7-4b15-a4b9-a0c2217384a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://713975d4e8de4e14484cbd711f5279ddce3acad00571bf052b0ed728bd1a0ccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qvsb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qltmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:57Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:57 crc kubenswrapper[4813]: I1125 10:32:57.108751 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sbzfj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eccc6bcf-65c9-4741-a1d7-e5545661d3d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf35ea2947d355207c657bf7ef54d855cead727db293543efaa653bb03718f6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8s86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75f58510a2e937f933fadfec014e5ddff8e6cea4df17e8ade67f4c7af9be7104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8s86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sbzfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:57Z is after 2025-08-24T17:21:41Z" Nov 25 
10:32:57 crc kubenswrapper[4813]: I1125 10:32:57.110107 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:57 crc kubenswrapper[4813]: I1125 10:32:57.110154 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:57 crc kubenswrapper[4813]: I1125 10:32:57.110171 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:57 crc kubenswrapper[4813]: I1125 10:32:57.110225 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:57 crc kubenswrapper[4813]: I1125 10:32:57.110243 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:57Z","lastTransitionTime":"2025-11-25T10:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:57 crc kubenswrapper[4813]: I1125 10:32:57.127934 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"061a2a52-878f-4543-8408-3a7b838f8881\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761ff3f6b4afa8edd4892d9fe727e977fb9700a8c7ab1c149c12bfa6431951c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf09669b247e0daa0787d296aa833570e1a542082a7a698bb499dc34f16fa4be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e593ff2a6412d8dfd3cd96e456f4fe9e2f8b04302d5b9036b828a3cf480b573\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11e2aa9eaa941ade1982256194422becbe3f375508cd507f603a822b10e03134\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:57Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:57 crc kubenswrapper[4813]: I1125 10:32:57.141381 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:57Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:57 crc kubenswrapper[4813]: I1125 10:32:57.152182 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:57Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:57 crc kubenswrapper[4813]: I1125 10:32:57.163006 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:57Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:57 crc kubenswrapper[4813]: I1125 10:32:57.195858 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4s9w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2ac9045-f02f-4149-afa5-61da1452d547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbdce0d7869276078c48cf3c335c37ec3c8f324e76db30e312485508977ed8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://792d5ec80cac3667bf3ad534b473ae86eca391f49782cfc0938d789eefd24a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://792d5ec80cac3667bf3ad534b473ae86eca391f49782cfc0938d789eefd24a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2afd11e5128cad91161f49b1e5d6ac378dbd319773996dbe702bf678a45a4a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2afd11e5128cad91161f49b1e5d6ac378dbd319773996dbe702bf678a45a4a91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00af788f1e52f5e8adb3f20e61f5fbcfd1090e97a1f24d4ebe926dad23155ae5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00af788f1e52f5e8adb3f20e61f5fbcfd1090e97a1f24d4ebe926dad23155ae5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://156bff53f3008351c3f76a0cc5e9c3eeb4f19a7201392d095bc62012791d9fa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://156bff53f3008351c3f76a0cc5e9c3eeb4f19a7201392d095bc62012791d9fa5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a98899b475454bf9249b6437439cb15a56278a71678cd2c7a430b4c14ef4022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a98899b475454bf9249b6437439cb15a56278a71678cd2c7a430b4c14ef4022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://345ac26e481961ce51e21644b04d31cd5a82c981e9a2355ddd863036cabb4a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://345ac26e481961ce51e21644b04d31cd5a82c981e9a2355ddd863036cabb4a4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4s9w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:57Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:57 crc kubenswrapper[4813]: I1125 10:32:57.212763 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:57 crc kubenswrapper[4813]: I1125 10:32:57.212810 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:57 crc kubenswrapper[4813]: I1125 10:32:57.212823 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:57 crc kubenswrapper[4813]: I1125 10:32:57.212839 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:57 crc kubenswrapper[4813]: I1125 10:32:57.212874 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:57Z","lastTransitionTime":"2025-11-25T10:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:57 crc kubenswrapper[4813]: I1125 10:32:57.225123 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8460ec76-ba89-4f8f-9055-d7274ab52d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0292e263e2315d5f0352fb15d9e84e89f103c0b8e3371db2a611b001c5a3fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ab3178c217051fe9026c77a963c194bed57ec0fb9521678f41c7c16235ca789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://ee35613ff013fdd9f9ba4aa81006a99cd328ab65010b9b337815829bfcc88937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1581fa41d3a426258f7c464d5e0f2ad431917ccec0616d26bb8b0affa320c90e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c4c4032f6080041e0b54686cb2c9981d2578e7a2bd02bcc1cf008c8fa3bfb6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7324d51c21107fadbd2f170e16f3cc20fc473ca9b7b1bbe0fc5e64378bd6ab7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f47ead7e465395c7960e5ab292e2f2869ed1630436f2739b1e0420f217a96cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f47ead7e465395c7960e5ab292e2f2869ed1630436f2739b1e0420f217a96cf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T10:32:36Z\\\",\\\"message\\\":\\\"[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {8efa4d1a-72f5-4dfa-9bc2-9d93ef11ecf2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1125 10:32:36.408644 6489 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:36Z is after 2025-08-24T17:21:41Z]\\\\nI1125 10:32:36.408623 6489 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver/check-endpoints]} name:Service_openshift-apiserver/check-endpoints_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed 
container=ovnkube-controller pod=ovnkube-node-8s5k7_openshift-ovn-kubernetes(8460ec76-ba89-4f8f-9055-d7274ab52d11)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32898e756d7697bcb5b6ae6780b7b752be67b44b9ce8c2f2459477c7f0b0a28d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6554bcb1ce7e97de39f99556fc4e3db63a583ea45bd87706a3c7737a8bde4f5b\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6554bcb1ce7e97de39f99556fc4e3db63a583ea45bd87706a3c7737a8bde4f5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8s5k7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:57Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:57 crc kubenswrapper[4813]: I1125 10:32:57.236246 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-w28xl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4dw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4dw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-w28xl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:57Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:57 crc kubenswrapper[4813]: I1125 10:32:57.248201 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03303956e8d88df49c9c142a7074fa39272a78ea67e868b302d3a663d7f7178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:57Z is after 2025-08-24T17:21:41Z" Nov 25 10:32:57 crc kubenswrapper[4813]: I1125 10:32:57.315019 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:57 crc kubenswrapper[4813]: I1125 10:32:57.315069 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:57 crc kubenswrapper[4813]: I1125 10:32:57.315080 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:57 crc kubenswrapper[4813]: I1125 10:32:57.315108 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:57 crc kubenswrapper[4813]: I1125 10:32:57.315117 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:57Z","lastTransitionTime":"2025-11-25T10:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:57 crc kubenswrapper[4813]: I1125 10:32:57.417416 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:57 crc kubenswrapper[4813]: I1125 10:32:57.417478 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:57 crc kubenswrapper[4813]: I1125 10:32:57.417488 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:57 crc kubenswrapper[4813]: I1125 10:32:57.417501 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:57 crc kubenswrapper[4813]: I1125 10:32:57.417509 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:57Z","lastTransitionTime":"2025-11-25T10:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:57 crc kubenswrapper[4813]: I1125 10:32:57.524156 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:57 crc kubenswrapper[4813]: I1125 10:32:57.524204 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:57 crc kubenswrapper[4813]: I1125 10:32:57.524216 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:57 crc kubenswrapper[4813]: I1125 10:32:57.524237 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:57 crc kubenswrapper[4813]: I1125 10:32:57.524246 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:57Z","lastTransitionTime":"2025-11-25T10:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:57 crc kubenswrapper[4813]: I1125 10:32:57.621139 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 10:32:57 crc kubenswrapper[4813]: I1125 10:32:57.621162 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w28xl" Nov 25 10:32:57 crc kubenswrapper[4813]: E1125 10:32:57.621326 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 10:32:57 crc kubenswrapper[4813]: E1125 10:32:57.621356 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w28xl" podUID="74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2" Nov 25 10:32:57 crc kubenswrapper[4813]: I1125 10:32:57.625983 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:57 crc kubenswrapper[4813]: I1125 10:32:57.626023 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:57 crc kubenswrapper[4813]: I1125 10:32:57.626040 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:57 crc kubenswrapper[4813]: I1125 10:32:57.626058 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:57 crc kubenswrapper[4813]: I1125 10:32:57.626075 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:57Z","lastTransitionTime":"2025-11-25T10:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:57 crc kubenswrapper[4813]: I1125 10:32:57.729354 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:57 crc kubenswrapper[4813]: I1125 10:32:57.729393 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:57 crc kubenswrapper[4813]: I1125 10:32:57.729407 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:57 crc kubenswrapper[4813]: I1125 10:32:57.729423 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:57 crc kubenswrapper[4813]: I1125 10:32:57.729435 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:57Z","lastTransitionTime":"2025-11-25T10:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:57 crc kubenswrapper[4813]: I1125 10:32:57.831222 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:57 crc kubenswrapper[4813]: I1125 10:32:57.831248 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:57 crc kubenswrapper[4813]: I1125 10:32:57.831256 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:57 crc kubenswrapper[4813]: I1125 10:32:57.831268 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:57 crc kubenswrapper[4813]: I1125 10:32:57.831277 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:57Z","lastTransitionTime":"2025-11-25T10:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:57 crc kubenswrapper[4813]: I1125 10:32:57.934292 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:57 crc kubenswrapper[4813]: I1125 10:32:57.934330 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:57 crc kubenswrapper[4813]: I1125 10:32:57.934342 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:57 crc kubenswrapper[4813]: I1125 10:32:57.934357 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:57 crc kubenswrapper[4813]: I1125 10:32:57.934370 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:57Z","lastTransitionTime":"2025-11-25T10:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:58 crc kubenswrapper[4813]: I1125 10:32:58.037939 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:58 crc kubenswrapper[4813]: I1125 10:32:58.038008 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:58 crc kubenswrapper[4813]: I1125 10:32:58.038035 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:58 crc kubenswrapper[4813]: I1125 10:32:58.038068 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:58 crc kubenswrapper[4813]: I1125 10:32:58.038096 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:58Z","lastTransitionTime":"2025-11-25T10:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:58 crc kubenswrapper[4813]: I1125 10:32:58.142321 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:58 crc kubenswrapper[4813]: I1125 10:32:58.142377 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:58 crc kubenswrapper[4813]: I1125 10:32:58.142392 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:58 crc kubenswrapper[4813]: I1125 10:32:58.142413 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:58 crc kubenswrapper[4813]: I1125 10:32:58.142429 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:58Z","lastTransitionTime":"2025-11-25T10:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:58 crc kubenswrapper[4813]: I1125 10:32:58.245062 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:58 crc kubenswrapper[4813]: I1125 10:32:58.245093 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:58 crc kubenswrapper[4813]: I1125 10:32:58.245102 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:58 crc kubenswrapper[4813]: I1125 10:32:58.245115 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:58 crc kubenswrapper[4813]: I1125 10:32:58.245125 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:58Z","lastTransitionTime":"2025-11-25T10:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:58 crc kubenswrapper[4813]: I1125 10:32:58.347523 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:58 crc kubenswrapper[4813]: I1125 10:32:58.347578 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:58 crc kubenswrapper[4813]: I1125 10:32:58.347602 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:58 crc kubenswrapper[4813]: I1125 10:32:58.347629 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:58 crc kubenswrapper[4813]: I1125 10:32:58.347643 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:58Z","lastTransitionTime":"2025-11-25T10:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:58 crc kubenswrapper[4813]: I1125 10:32:58.449763 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:58 crc kubenswrapper[4813]: I1125 10:32:58.449797 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:58 crc kubenswrapper[4813]: I1125 10:32:58.449806 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:58 crc kubenswrapper[4813]: I1125 10:32:58.449818 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:58 crc kubenswrapper[4813]: I1125 10:32:58.449828 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:58Z","lastTransitionTime":"2025-11-25T10:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:58 crc kubenswrapper[4813]: I1125 10:32:58.553387 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:58 crc kubenswrapper[4813]: I1125 10:32:58.553446 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:58 crc kubenswrapper[4813]: I1125 10:32:58.553461 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:58 crc kubenswrapper[4813]: I1125 10:32:58.553486 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:58 crc kubenswrapper[4813]: I1125 10:32:58.553531 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:58Z","lastTransitionTime":"2025-11-25T10:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:58 crc kubenswrapper[4813]: I1125 10:32:58.620773 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 10:32:58 crc kubenswrapper[4813]: I1125 10:32:58.620980 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 10:32:58 crc kubenswrapper[4813]: E1125 10:32:58.621122 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 10:32:58 crc kubenswrapper[4813]: E1125 10:32:58.620971 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 10:32:58 crc kubenswrapper[4813]: I1125 10:32:58.656916 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:58 crc kubenswrapper[4813]: I1125 10:32:58.656978 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:58 crc kubenswrapper[4813]: I1125 10:32:58.656992 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:58 crc kubenswrapper[4813]: I1125 10:32:58.657022 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:58 crc kubenswrapper[4813]: I1125 10:32:58.657036 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:58Z","lastTransitionTime":"2025-11-25T10:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:58 crc kubenswrapper[4813]: I1125 10:32:58.760317 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:58 crc kubenswrapper[4813]: I1125 10:32:58.760353 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:58 crc kubenswrapper[4813]: I1125 10:32:58.760361 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:58 crc kubenswrapper[4813]: I1125 10:32:58.760376 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:58 crc kubenswrapper[4813]: I1125 10:32:58.760386 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:58Z","lastTransitionTime":"2025-11-25T10:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:58 crc kubenswrapper[4813]: I1125 10:32:58.862929 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:58 crc kubenswrapper[4813]: I1125 10:32:58.862973 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:58 crc kubenswrapper[4813]: I1125 10:32:58.862984 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:58 crc kubenswrapper[4813]: I1125 10:32:58.863002 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:58 crc kubenswrapper[4813]: I1125 10:32:58.863014 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:58Z","lastTransitionTime":"2025-11-25T10:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:58 crc kubenswrapper[4813]: I1125 10:32:58.966239 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:58 crc kubenswrapper[4813]: I1125 10:32:58.966288 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:58 crc kubenswrapper[4813]: I1125 10:32:58.966299 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:58 crc kubenswrapper[4813]: I1125 10:32:58.966317 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:58 crc kubenswrapper[4813]: I1125 10:32:58.966331 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:58Z","lastTransitionTime":"2025-11-25T10:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:59 crc kubenswrapper[4813]: I1125 10:32:59.068186 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:59 crc kubenswrapper[4813]: I1125 10:32:59.068258 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:59 crc kubenswrapper[4813]: I1125 10:32:59.068277 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:59 crc kubenswrapper[4813]: I1125 10:32:59.068302 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:59 crc kubenswrapper[4813]: I1125 10:32:59.068319 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:59Z","lastTransitionTime":"2025-11-25T10:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:59 crc kubenswrapper[4813]: I1125 10:32:59.171832 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:59 crc kubenswrapper[4813]: I1125 10:32:59.172600 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:59 crc kubenswrapper[4813]: I1125 10:32:59.172717 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:59 crc kubenswrapper[4813]: I1125 10:32:59.172904 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:59 crc kubenswrapper[4813]: I1125 10:32:59.173036 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:59Z","lastTransitionTime":"2025-11-25T10:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:59 crc kubenswrapper[4813]: I1125 10:32:59.276502 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:59 crc kubenswrapper[4813]: I1125 10:32:59.276973 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:59 crc kubenswrapper[4813]: I1125 10:32:59.277040 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:59 crc kubenswrapper[4813]: I1125 10:32:59.277112 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:59 crc kubenswrapper[4813]: I1125 10:32:59.277764 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:59Z","lastTransitionTime":"2025-11-25T10:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:59 crc kubenswrapper[4813]: I1125 10:32:59.381469 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:59 crc kubenswrapper[4813]: I1125 10:32:59.381560 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:59 crc kubenswrapper[4813]: I1125 10:32:59.381577 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:59 crc kubenswrapper[4813]: I1125 10:32:59.381601 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:59 crc kubenswrapper[4813]: I1125 10:32:59.381617 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:59Z","lastTransitionTime":"2025-11-25T10:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:59 crc kubenswrapper[4813]: I1125 10:32:59.485629 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:59 crc kubenswrapper[4813]: I1125 10:32:59.485709 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:59 crc kubenswrapper[4813]: I1125 10:32:59.485723 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:59 crc kubenswrapper[4813]: I1125 10:32:59.485742 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:59 crc kubenswrapper[4813]: I1125 10:32:59.485754 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:59Z","lastTransitionTime":"2025-11-25T10:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:59 crc kubenswrapper[4813]: I1125 10:32:59.588631 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:59 crc kubenswrapper[4813]: I1125 10:32:59.588671 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:59 crc kubenswrapper[4813]: I1125 10:32:59.588705 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:59 crc kubenswrapper[4813]: I1125 10:32:59.588725 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:59 crc kubenswrapper[4813]: I1125 10:32:59.588736 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:59Z","lastTransitionTime":"2025-11-25T10:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:59 crc kubenswrapper[4813]: I1125 10:32:59.621414 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w28xl" Nov 25 10:32:59 crc kubenswrapper[4813]: E1125 10:32:59.621550 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w28xl" podUID="74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2" Nov 25 10:32:59 crc kubenswrapper[4813]: I1125 10:32:59.621875 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 10:32:59 crc kubenswrapper[4813]: E1125 10:32:59.622101 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 10:32:59 crc kubenswrapper[4813]: I1125 10:32:59.690936 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:59 crc kubenswrapper[4813]: I1125 10:32:59.690983 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:59 crc kubenswrapper[4813]: I1125 10:32:59.691000 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:59 crc kubenswrapper[4813]: I1125 10:32:59.691022 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:59 crc kubenswrapper[4813]: I1125 10:32:59.691036 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:59Z","lastTransitionTime":"2025-11-25T10:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:32:59 crc kubenswrapper[4813]: I1125 10:32:59.793523 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:59 crc kubenswrapper[4813]: I1125 10:32:59.793568 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:59 crc kubenswrapper[4813]: I1125 10:32:59.793582 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:59 crc kubenswrapper[4813]: I1125 10:32:59.793599 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:59 crc kubenswrapper[4813]: I1125 10:32:59.793610 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:59Z","lastTransitionTime":"2025-11-25T10:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:32:59 crc kubenswrapper[4813]: I1125 10:32:59.900365 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:32:59 crc kubenswrapper[4813]: I1125 10:32:59.900421 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:32:59 crc kubenswrapper[4813]: I1125 10:32:59.900436 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:32:59 crc kubenswrapper[4813]: I1125 10:32:59.900456 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:32:59 crc kubenswrapper[4813]: I1125 10:32:59.900470 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:32:59Z","lastTransitionTime":"2025-11-25T10:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:00 crc kubenswrapper[4813]: I1125 10:33:00.002663 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:00 crc kubenswrapper[4813]: I1125 10:33:00.002753 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:00 crc kubenswrapper[4813]: I1125 10:33:00.002768 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:00 crc kubenswrapper[4813]: I1125 10:33:00.002788 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:00 crc kubenswrapper[4813]: I1125 10:33:00.002803 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:00Z","lastTransitionTime":"2025-11-25T10:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:00 crc kubenswrapper[4813]: I1125 10:33:00.106736 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:00 crc kubenswrapper[4813]: I1125 10:33:00.106780 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:00 crc kubenswrapper[4813]: I1125 10:33:00.106790 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:00 crc kubenswrapper[4813]: I1125 10:33:00.106807 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:00 crc kubenswrapper[4813]: I1125 10:33:00.106818 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:00Z","lastTransitionTime":"2025-11-25T10:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:00 crc kubenswrapper[4813]: I1125 10:33:00.212036 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:00 crc kubenswrapper[4813]: I1125 10:33:00.212110 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:00 crc kubenswrapper[4813]: I1125 10:33:00.212126 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:00 crc kubenswrapper[4813]: I1125 10:33:00.212144 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:00 crc kubenswrapper[4813]: I1125 10:33:00.212156 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:00Z","lastTransitionTime":"2025-11-25T10:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:00 crc kubenswrapper[4813]: I1125 10:33:00.314781 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:00 crc kubenswrapper[4813]: I1125 10:33:00.315067 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:00 crc kubenswrapper[4813]: I1125 10:33:00.315167 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:00 crc kubenswrapper[4813]: I1125 10:33:00.315274 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:00 crc kubenswrapper[4813]: I1125 10:33:00.315361 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:00Z","lastTransitionTime":"2025-11-25T10:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:00 crc kubenswrapper[4813]: I1125 10:33:00.417907 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:00 crc kubenswrapper[4813]: I1125 10:33:00.417950 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:00 crc kubenswrapper[4813]: I1125 10:33:00.417959 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:00 crc kubenswrapper[4813]: I1125 10:33:00.417972 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:00 crc kubenswrapper[4813]: I1125 10:33:00.417981 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:00Z","lastTransitionTime":"2025-11-25T10:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:00 crc kubenswrapper[4813]: I1125 10:33:00.520598 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:00 crc kubenswrapper[4813]: I1125 10:33:00.520640 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:00 crc kubenswrapper[4813]: I1125 10:33:00.520653 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:00 crc kubenswrapper[4813]: I1125 10:33:00.520668 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:00 crc kubenswrapper[4813]: I1125 10:33:00.520692 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:00Z","lastTransitionTime":"2025-11-25T10:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:00 crc kubenswrapper[4813]: I1125 10:33:00.620913 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 10:33:00 crc kubenswrapper[4813]: I1125 10:33:00.620950 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 10:33:00 crc kubenswrapper[4813]: E1125 10:33:00.621131 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 10:33:00 crc kubenswrapper[4813]: E1125 10:33:00.621193 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 10:33:00 crc kubenswrapper[4813]: I1125 10:33:00.622636 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:00 crc kubenswrapper[4813]: I1125 10:33:00.622674 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:00 crc kubenswrapper[4813]: I1125 10:33:00.622702 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:00 crc kubenswrapper[4813]: I1125 10:33:00.622718 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:00 crc kubenswrapper[4813]: I1125 10:33:00.622730 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:00Z","lastTransitionTime":"2025-11-25T10:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:00 crc kubenswrapper[4813]: I1125 10:33:00.725383 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:00 crc kubenswrapper[4813]: I1125 10:33:00.725471 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:00 crc kubenswrapper[4813]: I1125 10:33:00.725497 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:00 crc kubenswrapper[4813]: I1125 10:33:00.725527 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:00 crc kubenswrapper[4813]: I1125 10:33:00.725550 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:00Z","lastTransitionTime":"2025-11-25T10:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:00 crc kubenswrapper[4813]: I1125 10:33:00.829020 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:00 crc kubenswrapper[4813]: I1125 10:33:00.829078 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:00 crc kubenswrapper[4813]: I1125 10:33:00.829094 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:00 crc kubenswrapper[4813]: I1125 10:33:00.829117 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:00 crc kubenswrapper[4813]: I1125 10:33:00.829139 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:00Z","lastTransitionTime":"2025-11-25T10:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:00 crc kubenswrapper[4813]: I1125 10:33:00.931660 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:00 crc kubenswrapper[4813]: I1125 10:33:00.931726 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:00 crc kubenswrapper[4813]: I1125 10:33:00.931736 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:00 crc kubenswrapper[4813]: I1125 10:33:00.931754 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:00 crc kubenswrapper[4813]: I1125 10:33:00.931766 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:00Z","lastTransitionTime":"2025-11-25T10:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:01 crc kubenswrapper[4813]: I1125 10:33:01.035048 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:01 crc kubenswrapper[4813]: I1125 10:33:01.035118 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:01 crc kubenswrapper[4813]: I1125 10:33:01.035137 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:01 crc kubenswrapper[4813]: I1125 10:33:01.035161 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:01 crc kubenswrapper[4813]: I1125 10:33:01.035182 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:01Z","lastTransitionTime":"2025-11-25T10:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:01 crc kubenswrapper[4813]: I1125 10:33:01.069159 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:01 crc kubenswrapper[4813]: I1125 10:33:01.069229 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:01 crc kubenswrapper[4813]: I1125 10:33:01.069248 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:01 crc kubenswrapper[4813]: I1125 10:33:01.069275 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:01 crc kubenswrapper[4813]: I1125 10:33:01.069295 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:01Z","lastTransitionTime":"2025-11-25T10:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:01 crc kubenswrapper[4813]: E1125 10:33:01.087115 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:33:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:33:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:33:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:33:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:33:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:33:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:33:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:33:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b8f6803-8c92-44d2-bc35-374b0f00608e\\\",\\\"systemUUID\\\":\\\"85f815b0-dc24-49ca-a7fb-6bc8e198cbb1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:01Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:01 crc kubenswrapper[4813]: I1125 10:33:01.093076 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:01 crc kubenswrapper[4813]: I1125 10:33:01.093131 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 10:33:01 crc kubenswrapper[4813]: I1125 10:33:01.093153 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:01 crc kubenswrapper[4813]: I1125 10:33:01.093184 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:01 crc kubenswrapper[4813]: I1125 10:33:01.093206 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:01Z","lastTransitionTime":"2025-11-25T10:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:01 crc kubenswrapper[4813]: E1125 10:33:01.115911 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:33:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:33:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:33:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:33:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:33:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:33:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:33:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:33:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b8f6803-8c92-44d2-bc35-374b0f00608e\\\",\\\"systemUUID\\\":\\\"85f815b0-dc24-49ca-a7fb-6bc8e198cbb1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:01Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:01 crc kubenswrapper[4813]: I1125 10:33:01.121319 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:01 crc kubenswrapper[4813]: I1125 10:33:01.121367 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 10:33:01 crc kubenswrapper[4813]: I1125 10:33:01.121381 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:01 crc kubenswrapper[4813]: I1125 10:33:01.121402 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:01 crc kubenswrapper[4813]: I1125 10:33:01.121419 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:01Z","lastTransitionTime":"2025-11-25T10:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:01 crc kubenswrapper[4813]: E1125 10:33:01.139245 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:33:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:33:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:33:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:33:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:33:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:33:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:33:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:33:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b8f6803-8c92-44d2-bc35-374b0f00608e\\\",\\\"systemUUID\\\":\\\"85f815b0-dc24-49ca-a7fb-6bc8e198cbb1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:01Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:01 crc kubenswrapper[4813]: I1125 10:33:01.144637 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:01 crc kubenswrapper[4813]: I1125 10:33:01.144698 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 10:33:01 crc kubenswrapper[4813]: I1125 10:33:01.144713 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:01 crc kubenswrapper[4813]: I1125 10:33:01.144732 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:01 crc kubenswrapper[4813]: I1125 10:33:01.144747 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:01Z","lastTransitionTime":"2025-11-25T10:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:01 crc kubenswrapper[4813]: E1125 10:33:01.163310 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:33:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:33:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:33:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:33:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:33:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:33:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:33:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:33:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b8f6803-8c92-44d2-bc35-374b0f00608e\\\",\\\"systemUUID\\\":\\\"85f815b0-dc24-49ca-a7fb-6bc8e198cbb1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:01Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:01 crc kubenswrapper[4813]: I1125 10:33:01.169257 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:01 crc kubenswrapper[4813]: I1125 10:33:01.169322 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 10:33:01 crc kubenswrapper[4813]: I1125 10:33:01.169336 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:01 crc kubenswrapper[4813]: I1125 10:33:01.169361 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:01 crc kubenswrapper[4813]: I1125 10:33:01.169375 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:01Z","lastTransitionTime":"2025-11-25T10:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:01 crc kubenswrapper[4813]: E1125 10:33:01.184124 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:33:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:33:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:33:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:33:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:33:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:33:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:33:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:33:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b8f6803-8c92-44d2-bc35-374b0f00608e\\\",\\\"systemUUID\\\":\\\"85f815b0-dc24-49ca-a7fb-6bc8e198cbb1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:01Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:01 crc kubenswrapper[4813]: E1125 10:33:01.184296 4813 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 10:33:01 crc kubenswrapper[4813]: I1125 10:33:01.186153 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 25 10:33:01 crc kubenswrapper[4813]: I1125 10:33:01.186203 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:01 crc kubenswrapper[4813]: I1125 10:33:01.186214 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:01 crc kubenswrapper[4813]: I1125 10:33:01.186235 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:01 crc kubenswrapper[4813]: I1125 10:33:01.186250 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:01Z","lastTransitionTime":"2025-11-25T10:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:01 crc kubenswrapper[4813]: I1125 10:33:01.290080 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:01 crc kubenswrapper[4813]: I1125 10:33:01.290179 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:01 crc kubenswrapper[4813]: I1125 10:33:01.290206 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:01 crc kubenswrapper[4813]: I1125 10:33:01.290262 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:01 crc kubenswrapper[4813]: I1125 10:33:01.290292 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:01Z","lastTransitionTime":"2025-11-25T10:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:01 crc kubenswrapper[4813]: I1125 10:33:01.393326 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:01 crc kubenswrapper[4813]: I1125 10:33:01.393376 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:01 crc kubenswrapper[4813]: I1125 10:33:01.393388 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:01 crc kubenswrapper[4813]: I1125 10:33:01.393403 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:01 crc kubenswrapper[4813]: I1125 10:33:01.393414 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:01Z","lastTransitionTime":"2025-11-25T10:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:01 crc kubenswrapper[4813]: I1125 10:33:01.496344 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:01 crc kubenswrapper[4813]: I1125 10:33:01.496406 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:01 crc kubenswrapper[4813]: I1125 10:33:01.496423 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:01 crc kubenswrapper[4813]: I1125 10:33:01.496448 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:01 crc kubenswrapper[4813]: I1125 10:33:01.496468 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:01Z","lastTransitionTime":"2025-11-25T10:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:01 crc kubenswrapper[4813]: I1125 10:33:01.599455 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:01 crc kubenswrapper[4813]: I1125 10:33:01.599507 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:01 crc kubenswrapper[4813]: I1125 10:33:01.599519 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:01 crc kubenswrapper[4813]: I1125 10:33:01.599539 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:01 crc kubenswrapper[4813]: I1125 10:33:01.599554 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:01Z","lastTransitionTime":"2025-11-25T10:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:01 crc kubenswrapper[4813]: I1125 10:33:01.621187 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 10:33:01 crc kubenswrapper[4813]: I1125 10:33:01.621290 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w28xl" Nov 25 10:33:01 crc kubenswrapper[4813]: E1125 10:33:01.621724 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 10:33:01 crc kubenswrapper[4813]: E1125 10:33:01.621841 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w28xl" podUID="74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2" Nov 25 10:33:01 crc kubenswrapper[4813]: I1125 10:33:01.703420 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:01 crc kubenswrapper[4813]: I1125 10:33:01.703488 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:01 crc kubenswrapper[4813]: I1125 10:33:01.703514 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:01 crc kubenswrapper[4813]: I1125 10:33:01.703549 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:01 crc kubenswrapper[4813]: I1125 10:33:01.703573 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:01Z","lastTransitionTime":"2025-11-25T10:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:01 crc kubenswrapper[4813]: I1125 10:33:01.807497 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:01 crc kubenswrapper[4813]: I1125 10:33:01.808047 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:01 crc kubenswrapper[4813]: I1125 10:33:01.808247 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:01 crc kubenswrapper[4813]: I1125 10:33:01.808433 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:01 crc kubenswrapper[4813]: I1125 10:33:01.808586 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:01Z","lastTransitionTime":"2025-11-25T10:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:01 crc kubenswrapper[4813]: I1125 10:33:01.912471 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:01 crc kubenswrapper[4813]: I1125 10:33:01.912551 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:01 crc kubenswrapper[4813]: I1125 10:33:01.912574 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:01 crc kubenswrapper[4813]: I1125 10:33:01.912605 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:01 crc kubenswrapper[4813]: I1125 10:33:01.912748 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:01Z","lastTransitionTime":"2025-11-25T10:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:02 crc kubenswrapper[4813]: I1125 10:33:02.016427 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:02 crc kubenswrapper[4813]: I1125 10:33:02.016467 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:02 crc kubenswrapper[4813]: I1125 10:33:02.016479 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:02 crc kubenswrapper[4813]: I1125 10:33:02.016501 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:02 crc kubenswrapper[4813]: I1125 10:33:02.016515 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:02Z","lastTransitionTime":"2025-11-25T10:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:02 crc kubenswrapper[4813]: I1125 10:33:02.118437 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:02 crc kubenswrapper[4813]: I1125 10:33:02.118491 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:02 crc kubenswrapper[4813]: I1125 10:33:02.118506 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:02 crc kubenswrapper[4813]: I1125 10:33:02.118524 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:02 crc kubenswrapper[4813]: I1125 10:33:02.118536 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:02Z","lastTransitionTime":"2025-11-25T10:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:02 crc kubenswrapper[4813]: I1125 10:33:02.221752 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:02 crc kubenswrapper[4813]: I1125 10:33:02.221813 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:02 crc kubenswrapper[4813]: I1125 10:33:02.221827 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:02 crc kubenswrapper[4813]: I1125 10:33:02.221849 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:02 crc kubenswrapper[4813]: I1125 10:33:02.221868 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:02Z","lastTransitionTime":"2025-11-25T10:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:02 crc kubenswrapper[4813]: I1125 10:33:02.325018 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:02 crc kubenswrapper[4813]: I1125 10:33:02.325078 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:02 crc kubenswrapper[4813]: I1125 10:33:02.325098 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:02 crc kubenswrapper[4813]: I1125 10:33:02.325123 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:02 crc kubenswrapper[4813]: I1125 10:33:02.325141 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:02Z","lastTransitionTime":"2025-11-25T10:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:02 crc kubenswrapper[4813]: I1125 10:33:02.428144 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:02 crc kubenswrapper[4813]: I1125 10:33:02.428193 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:02 crc kubenswrapper[4813]: I1125 10:33:02.428203 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:02 crc kubenswrapper[4813]: I1125 10:33:02.428219 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:02 crc kubenswrapper[4813]: I1125 10:33:02.428233 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:02Z","lastTransitionTime":"2025-11-25T10:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:02 crc kubenswrapper[4813]: I1125 10:33:02.531890 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:02 crc kubenswrapper[4813]: I1125 10:33:02.531942 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:02 crc kubenswrapper[4813]: I1125 10:33:02.531954 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:02 crc kubenswrapper[4813]: I1125 10:33:02.531975 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:02 crc kubenswrapper[4813]: I1125 10:33:02.531996 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:02Z","lastTransitionTime":"2025-11-25T10:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:02 crc kubenswrapper[4813]: I1125 10:33:02.621020 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 10:33:02 crc kubenswrapper[4813]: I1125 10:33:02.621168 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 10:33:02 crc kubenswrapper[4813]: E1125 10:33:02.621305 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 10:33:02 crc kubenswrapper[4813]: E1125 10:33:02.621470 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 10:33:02 crc kubenswrapper[4813]: I1125 10:33:02.635291 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:02 crc kubenswrapper[4813]: I1125 10:33:02.635363 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:02 crc kubenswrapper[4813]: I1125 10:33:02.635375 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:02 crc kubenswrapper[4813]: I1125 10:33:02.635396 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:02 crc kubenswrapper[4813]: I1125 10:33:02.635407 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:02Z","lastTransitionTime":"2025-11-25T10:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:02 crc kubenswrapper[4813]: I1125 10:33:02.738076 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:02 crc kubenswrapper[4813]: I1125 10:33:02.738133 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:02 crc kubenswrapper[4813]: I1125 10:33:02.738154 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:02 crc kubenswrapper[4813]: I1125 10:33:02.738179 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:02 crc kubenswrapper[4813]: I1125 10:33:02.738193 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:02Z","lastTransitionTime":"2025-11-25T10:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:02 crc kubenswrapper[4813]: I1125 10:33:02.841374 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:02 crc kubenswrapper[4813]: I1125 10:33:02.841444 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:02 crc kubenswrapper[4813]: I1125 10:33:02.841458 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:02 crc kubenswrapper[4813]: I1125 10:33:02.841481 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:02 crc kubenswrapper[4813]: I1125 10:33:02.841496 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:02Z","lastTransitionTime":"2025-11-25T10:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:02 crc kubenswrapper[4813]: I1125 10:33:02.944190 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:02 crc kubenswrapper[4813]: I1125 10:33:02.944237 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:02 crc kubenswrapper[4813]: I1125 10:33:02.944250 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:02 crc kubenswrapper[4813]: I1125 10:33:02.944269 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:02 crc kubenswrapper[4813]: I1125 10:33:02.944284 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:02Z","lastTransitionTime":"2025-11-25T10:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:03 crc kubenswrapper[4813]: I1125 10:33:03.047003 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:03 crc kubenswrapper[4813]: I1125 10:33:03.047045 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:03 crc kubenswrapper[4813]: I1125 10:33:03.047058 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:03 crc kubenswrapper[4813]: I1125 10:33:03.047074 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:03 crc kubenswrapper[4813]: I1125 10:33:03.047087 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:03Z","lastTransitionTime":"2025-11-25T10:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:03 crc kubenswrapper[4813]: I1125 10:33:03.149668 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:03 crc kubenswrapper[4813]: I1125 10:33:03.149740 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:03 crc kubenswrapper[4813]: I1125 10:33:03.149776 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:03 crc kubenswrapper[4813]: I1125 10:33:03.149796 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:03 crc kubenswrapper[4813]: I1125 10:33:03.149809 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:03Z","lastTransitionTime":"2025-11-25T10:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:03 crc kubenswrapper[4813]: I1125 10:33:03.252116 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:03 crc kubenswrapper[4813]: I1125 10:33:03.252175 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:03 crc kubenswrapper[4813]: I1125 10:33:03.252191 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:03 crc kubenswrapper[4813]: I1125 10:33:03.252215 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:03 crc kubenswrapper[4813]: I1125 10:33:03.252228 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:03Z","lastTransitionTime":"2025-11-25T10:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:03 crc kubenswrapper[4813]: I1125 10:33:03.355094 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:03 crc kubenswrapper[4813]: I1125 10:33:03.355148 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:03 crc kubenswrapper[4813]: I1125 10:33:03.355162 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:03 crc kubenswrapper[4813]: I1125 10:33:03.355180 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:03 crc kubenswrapper[4813]: I1125 10:33:03.355193 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:03Z","lastTransitionTime":"2025-11-25T10:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:03 crc kubenswrapper[4813]: I1125 10:33:03.457824 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:03 crc kubenswrapper[4813]: I1125 10:33:03.457884 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:03 crc kubenswrapper[4813]: I1125 10:33:03.457899 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:03 crc kubenswrapper[4813]: I1125 10:33:03.457925 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:03 crc kubenswrapper[4813]: I1125 10:33:03.457940 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:03Z","lastTransitionTime":"2025-11-25T10:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:03 crc kubenswrapper[4813]: I1125 10:33:03.561010 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:03 crc kubenswrapper[4813]: I1125 10:33:03.561064 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:03 crc kubenswrapper[4813]: I1125 10:33:03.561077 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:03 crc kubenswrapper[4813]: I1125 10:33:03.561106 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:03 crc kubenswrapper[4813]: I1125 10:33:03.561119 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:03Z","lastTransitionTime":"2025-11-25T10:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:03 crc kubenswrapper[4813]: I1125 10:33:03.621003 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 10:33:03 crc kubenswrapper[4813]: E1125 10:33:03.621167 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 10:33:03 crc kubenswrapper[4813]: I1125 10:33:03.621656 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-w28xl" Nov 25 10:33:03 crc kubenswrapper[4813]: E1125 10:33:03.621749 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w28xl" podUID="74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2" Nov 25 10:33:03 crc kubenswrapper[4813]: I1125 10:33:03.622526 4813 scope.go:117] "RemoveContainer" containerID="0f47ead7e465395c7960e5ab292e2f2869ed1630436f2739b1e0420f217a96cf" Nov 25 10:33:03 crc kubenswrapper[4813]: I1125 10:33:03.648190 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"061a2a52-878f-4543-8408-3a7b838f8881\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761ff3f6b4afa8edd4892d9fe727e977fb9700a8c7ab1c149c12bfa6431951c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf09669b247e0daa0787d296aa833570e1a542082a7a698bb499dc34f16fa4be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e593ff2a6412d8dfd3cd96e456f4fe9e2f8b04302d5b9036b828a3cf480b573\\\",\\\"imag
e\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11e2aa9eaa941ade1982256194422becbe3f375508cd507f603a822b10e03134\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:03Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:03 crc kubenswrapper[4813]: I1125 10:33:03.664158 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:03 crc kubenswrapper[4813]: I1125 10:33:03.664215 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:03 crc kubenswrapper[4813]: I1125 10:33:03.664225 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:03 crc kubenswrapper[4813]: I1125 10:33:03.664245 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:03 crc kubenswrapper[4813]: I1125 10:33:03.664258 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:03Z","lastTransitionTime":"2025-11-25T10:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:03 crc kubenswrapper[4813]: I1125 10:33:03.671030 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:03Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:03 crc kubenswrapper[4813]: I1125 10:33:03.687942 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:03Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:03 crc kubenswrapper[4813]: I1125 10:33:03.705496 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:03Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:03 crc kubenswrapper[4813]: I1125 10:33:03.731493 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4s9w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2ac9045-f02f-4149-afa5-61da1452d547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbdce0d7869276078c48cf3c335c37ec3c8f324e76db30e312485508977ed8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://792d5ec80cac3667bf3ad534b473ae86eca391f49782cfc0938d789eefd24a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://792d5ec80cac3667bf3ad534b473ae86eca391f49782cfc0938d789eefd24a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2afd11e5128cad91161f49b1e5d6ac378dbd319773996dbe702bf678a45a4a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2afd11e5128cad91161f49b1e5d6ac378dbd319773996dbe702bf678a45a4a91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00af788f1e52f5e8adb3f20e61f5fbcfd1090e97a1f24d4ebe926dad23155ae5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00af788f1e52f5e8adb3f20e61f5fbcfd1090e97a1f24d4ebe926dad23155ae5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://156bff53f3008351c3f76a0cc5e9c3eeb4f19a7201392d095bc62012791d9fa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://156bff53f3008351c3f76a0cc5e9c3eeb4f19a7201392d095bc62012791d9fa5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a98899b475454bf9249b6437439cb15a56278a71678cd2c7a430b4c14ef4022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a98899b475454bf9249b6437439cb15a56278a71678cd2c7a430b4c14ef4022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://345ac26e481961ce51e21644b04d31cd5a82c981e9a2355ddd863036cabb4a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://345ac26e481961ce51e21644b04d31cd5a82c981e9a2355ddd863036cabb4a4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4s9w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:03Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:03 crc kubenswrapper[4813]: I1125 10:33:03.755468 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8460ec76-ba89-4f8f-9055-d7274ab52d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0292e263e2315d5f0352fb15d9e84e89f103c0b8e3371db2a611b001c5a3fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ab3178c217051fe9026c77a963c194bed57ec0fb9521678f41c7c16235ca789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee35613ff013fdd9f9ba4aa81006a99cd328ab65010b9b337815829bfcc88937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1581fa41d3a426258f7c464d5e0f2ad431917ccec0616d26bb8b0affa320c90e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c4c4032f6080041e0b54686cb2c9981d2578e7a2bd02bcc1cf008c8fa3bfb6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7324d51c21107fadbd2f170e16f3cc20fc473ca9b7b1bbe0fc5e64378bd6ab7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f47ead7e465395c7960e5ab292e2f2869ed1630
436f2739b1e0420f217a96cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f47ead7e465395c7960e5ab292e2f2869ed1630436f2739b1e0420f217a96cf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T10:32:36Z\\\",\\\"message\\\":\\\"[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {8efa4d1a-72f5-4dfa-9bc2-9d93ef11ecf2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1125 10:32:36.408644 6489 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:36Z is after 2025-08-24T17:21:41Z]\\\\nI1125 10:32:36.408623 6489 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver/check-endpoints]} name:Service_openshift-apiserver/check-endpoints_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8s5k7_openshift-ovn-kubernetes(8460ec76-ba89-4f8f-9055-d7274ab52d11)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32898e756d7697bcb5b6ae6780b7b752be67b44b9ce8c2f2459477c7f0b0a28d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6554bcb1ce7e97de39f99556fc4e3db63a583ea45bd87706a3c7737a8bde4f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6554bcb1ce7e97de39f99556fc4e3db63a583ea45bd87706a3c7737a8bde4f5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8s5k7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:03Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:03 crc kubenswrapper[4813]: I1125 10:33:03.768474 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:03 crc kubenswrapper[4813]: I1125 10:33:03.768525 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:03 crc kubenswrapper[4813]: I1125 10:33:03.768542 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:03 crc kubenswrapper[4813]: I1125 10:33:03.768572 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:03 crc kubenswrapper[4813]: I1125 10:33:03.768588 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:03Z","lastTransitionTime":"2025-11-25T10:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:03 crc kubenswrapper[4813]: I1125 10:33:03.772874 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-w28xl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4dw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4dw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-w28xl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:03Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:03 crc kubenswrapper[4813]: I1125 10:33:03.793019 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03303956e8d88df49c9c142a7074fa39272a78ea67e868b302d3a663d7f7178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:03Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:03 crc kubenswrapper[4813]: I1125 10:33:03.807993 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86379c39-b839-4552-949c-35431188a3a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf4d6feac8fd516ce2d5e2ec13519c2bbd2d152cffe7c434fe2c4b478e8c9a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80f2017cddd8c12997b1818074df5aa37a902dca43c4b60dda58080e1887f8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225dc69c294a0063eda858d71902e848fb59d4595c25bfeecdf8dfb60fdcd6f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cbb3888ff07d07784e188a0b7b49e0f5b421cfaeb61924a0a46094fb3795b32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e393f04b541e0fc8c686b42396605529aa65fdaaf6602dd7c64a322a5071d643\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T10:31:57Z\\\",\\\"message\\\":\\\"W1125 10:31:46.900040 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1125 10:31:46.900557 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764066706 cert, and key in /tmp/serving-cert-1749499007/serving-signer.crt, /tmp/serving-cert-1749499007/serving-signer.key\\\\nI1125 10:31:47.317086 1 observer_polling.go:159] Starting file observer\\\\nW1125 10:31:47.321027 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 10:31:47.321219 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 10:31:47.325062 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1749499007/tls.crt::/tmp/serving-cert-1749499007/tls.key\\\\\\\"\\\\nF1125 10:31:57.761534 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46e1b456988c700012c86fac792b65d2e7c9a049057d5a17efbf600418191910\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3af1cb4a9a556e116874a9b7f7cb75a99db83b991b65280b6b2a96233ca0a85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://f3af1cb4a9a556e116874a9b7f7cb75a99db83b991b65280b6b2a96233ca0a85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:31:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:03Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:03 crc kubenswrapper[4813]: I1125 10:33:03.819698 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mmh87" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bcb41f8-67f5-4a87-8b49-07da054e0c81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fbf69eb2f0afb160e40675e9a17e8a9798a3f02de6a2f3aae7a30ef989e5479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xtc7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mmh87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:03Z is after 
2025-08-24T17:21:41Z" Nov 25 10:33:03 crc kubenswrapper[4813]: I1125 10:33:03.831268 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ece7e9c-d49a-4348-98ec-bd6ab589f750\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b85e2f2d2a870b205f19402a20540fa67104d12d2fcd412ada24c78b0602f2ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j55j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c16599a2b18976267f55176085b4b11e3e253e308707081d06d28d64f4dbb627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j55j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-knhz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:03Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:03 crc kubenswrapper[4813]: I1125 10:33:03.842654 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7391b3f2-dce9-4286-b622-7e7202a042c0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b823e81d1130cdb4373ba0b3d00a5f2d0717e34dcf36d2172550263b44e953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fa62598abd071ec69894326a022e35c2b383a5d5a1b893b0ecc1e30b8b775ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21dd198f1963287a0866dc0aa9d9854472f833cac0d0146a142a370e236b09f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\
\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9ab19e784bbd45e4f4c23288211674ac0d0affbe2736d338967e9237d672760\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9ab19e784bbd45e4f4c23288211674ac0d0affbe2736d338967e9237d672760\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:31:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:03Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:03 crc kubenswrapper[4813]: I1125 10:33:03.854086 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00ebb057ca6152197fa76fc78787533ab8ddaa1e1a096c624e3efc5fcf091332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616fae5157b8d51f903f870d19e7ed40447c3eb954b0e1bd0b3323c27deb59f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:03Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:03 crc kubenswrapper[4813]: I1125 10:33:03.865236 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adac7b8b6297f077adc2d0e402547d19845a4b66a1279e143ba89f014ccdbf15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:03Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:03 crc kubenswrapper[4813]: I1125 10:33:03.870387 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:03 crc kubenswrapper[4813]: I1125 10:33:03.870434 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:03 crc kubenswrapper[4813]: I1125 10:33:03.870448 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:03 crc kubenswrapper[4813]: I1125 10:33:03.870466 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:03 crc kubenswrapper[4813]: I1125 10:33:03.870481 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:03Z","lastTransitionTime":"2025-11-25T10:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:03 crc kubenswrapper[4813]: I1125 10:33:03.876818 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rlpbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98439068-3c89-4c1b-8bb8-8aa848ef0cd3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e45d1cfd847d1fbd71b9790ea8725a76ffc6117b372d227e921dad0143f7b30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73be3b0cabd20c94bd5c69211038398effe8adbb93eda17dbb136f17fa5ba62e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T10:32:55Z\\\",\\\"message\\\":\\\"2025-11-25T10:32:09+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_39ca8145-fa4d-4ac0-ba01-62afbe2deb27\\\\n2025-11-25T10:32:09+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_39ca8145-fa4d-4ac0-ba01-62afbe2deb27 to /host/opt/cni/bin/\\\\n2025-11-25T10:32:10Z [verbose] multus-daemon started\\\\n2025-11-25T10:32:10Z [verbose] Readiness Indicator file check\\\\n2025-11-25T10:32:55Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdxm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rlpbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:03Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:03 crc kubenswrapper[4813]: I1125 10:33:03.887249 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qltmc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7637b907-3ae7-4b15-a4b9-a0c2217384a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://713975d4e8de4e14484cbd711f5279ddce3acad00571bf052b0ed728bd1a0ccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qvsb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qltmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:03Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:03 crc kubenswrapper[4813]: I1125 10:33:03.899641 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sbzfj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eccc6bcf-65c9-4741-a1d7-e5545661d3d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf35ea2947d355207c657bf7ef54d855cead727db293543efaa653bb03718f6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8s86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75f58510a2e937f933fadfec014e5ddff8e6cea4df17e8ade67f4c7af9be7104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8s86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sbzfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:03Z is after 2025-08-24T17:21:41Z" Nov 25 
10:33:03 crc kubenswrapper[4813]: I1125 10:33:03.972940 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:03 crc kubenswrapper[4813]: I1125 10:33:03.972984 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:03 crc kubenswrapper[4813]: I1125 10:33:03.972997 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:03 crc kubenswrapper[4813]: I1125 10:33:03.973020 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:03 crc kubenswrapper[4813]: I1125 10:33:03.973032 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:03Z","lastTransitionTime":"2025-11-25T10:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:04 crc kubenswrapper[4813]: I1125 10:33:04.016761 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8s5k7_8460ec76-ba89-4f8f-9055-d7274ab52d11/ovnkube-controller/2.log" Nov 25 10:33:04 crc kubenswrapper[4813]: I1125 10:33:04.021622 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" event={"ID":"8460ec76-ba89-4f8f-9055-d7274ab52d11","Type":"ContainerStarted","Data":"c47a786668d4e29437970008a1e91d74d92c964ba10a6eba1f8d405d05a26e7b"} Nov 25 10:33:04 crc kubenswrapper[4813]: I1125 10:33:04.022419 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" Nov 25 10:33:04 crc kubenswrapper[4813]: I1125 10:33:04.037610 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03303956e8d88df49c9c142a7074fa39272a78ea67e868b302d3a663d7f7178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:04Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:04 crc kubenswrapper[4813]: I1125 10:33:04.055626 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86379c39-b839-4552-949c-35431188a3a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf4d6feac8fd516ce2d5e2ec13519c2bbd2d152cffe7c434fe2c4b478e8c9a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80f2017cddd8c12997b1818074df5aa37a902dca43c4b60dda58080e1887f8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225dc69c294a0063eda858d71902e848fb59d4595c25bfeecdf8dfb60fdcd6f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cbb3888ff07d07784e188a0b7b49e0f5b421cfaeb61924a0a46094fb3795b32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e393f04b541e0fc8c686b42396605529aa65fdaaf6602dd7c64a322a5071d643\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T10:31:57Z\\\",\\\"message\\\":\\\"W1125 10:31:46.900040 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1125 10:31:46.900557 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764066706 cert, and key in /tmp/serving-cert-1749499007/serving-signer.crt, /tmp/serving-cert-1749499007/serving-signer.key\\\\nI1125 10:31:47.317086 1 observer_polling.go:159] Starting file observer\\\\nW1125 10:31:47.321027 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 10:31:47.321219 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 10:31:47.325062 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1749499007/tls.crt::/tmp/serving-cert-1749499007/tls.key\\\\\\\"\\\\nF1125 10:31:57.761534 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46e1b456988c700012c86fac792b65d2e7c9a049057d5a17efbf600418191910\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3af1cb4a9a556e116874a9b7f7cb75a99db83b991b65280b6b2a96233ca0a85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://f3af1cb4a9a556e116874a9b7f7cb75a99db83b991b65280b6b2a96233ca0a85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:31:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:04Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:04 crc kubenswrapper[4813]: I1125 10:33:04.075502 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:04 crc kubenswrapper[4813]: I1125 10:33:04.075788 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:04 crc kubenswrapper[4813]: I1125 10:33:04.075865 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:04 crc kubenswrapper[4813]: I1125 10:33:04.075940 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:04 crc kubenswrapper[4813]: I1125 10:33:04.076005 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:04Z","lastTransitionTime":"2025-11-25T10:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:04 crc kubenswrapper[4813]: I1125 10:33:04.076362 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mmh87" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bcb41f8-67f5-4a87-8b49-07da054e0c81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fbf69eb2f0afb160e40675e9a17e8a9798a3f02de6a2f3aae7a30ef989e5479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xtc7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mmh87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:04Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:04 crc kubenswrapper[4813]: I1125 10:33:04.090336 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ece7e9c-d49a-4348-98ec-bd6ab589f750\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b85e2f2d2a870b205f19402a20540fa67104d12d2fcd412ada24c78b0602f2ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j55j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c16599a2b18976267f55176085b4b11e3e253e308707081d06d28d64f4dbb627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j55j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-knhz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:04Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:04 crc kubenswrapper[4813]: I1125 10:33:04.119009 4813 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sbzfj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eccc6bcf-65c9-4741-a1d7-e5545661d3d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf35ea2947d355207c657bf7ef54d855cead727db293543efaa653bb03718f6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8s86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75f58510a2e937f933fadfec014e5ddff8e6cea4df17e8ade67f4c7af9be7104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8s86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sbzfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:04Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:04 crc kubenswrapper[4813]: I1125 10:33:04.133279 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7391b3f2-dce9-4286-b622-7e7202a042c0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b823e81d1130cdb4373ba0b3d00a5f2d0717e34dcf36d2172550263b44e953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fa62598abd071ec69894326a022e35c2b383a5d5a1b893b0ecc1e30b8b775ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21dd198f1963287a0866dc0aa9d9854472f833cac0d0146a142a370e236b09f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"
},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9ab19e784bbd45e4f4c23288211674ac0d0affbe2736d338967e9237d672760\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9ab19e784bbd45e4f4c23288211674ac0d0affbe2736d338967e9237d672760\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:31:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:04Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:04 crc kubenswrapper[4813]: I1125 10:33:04.153348 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00ebb057ca6152197fa76fc78787533ab8ddaa1e1a096c624e3efc5fcf091332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616fae5157b8d51f903f870d19e7ed40447c3eb954b0e1bd0b3323c27deb59f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:04Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:04 crc kubenswrapper[4813]: I1125 10:33:04.166374 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adac7b8b6297f077adc2d0e402547d19845a4b66a1279e143ba89f014ccdbf15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:04Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:04 crc kubenswrapper[4813]: I1125 10:33:04.178436 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:04 crc kubenswrapper[4813]: I1125 10:33:04.178507 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:04 crc kubenswrapper[4813]: I1125 10:33:04.178519 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:04 crc kubenswrapper[4813]: I1125 10:33:04.178536 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:04 crc kubenswrapper[4813]: I1125 10:33:04.178549 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:04Z","lastTransitionTime":"2025-11-25T10:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:04 crc kubenswrapper[4813]: I1125 10:33:04.180732 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rlpbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98439068-3c89-4c1b-8bb8-8aa848ef0cd3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e45d1cfd847d1fbd71b9790ea8725a76ffc6117b372d227e921dad0143f7b30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73be3b0cabd20c94bd5c69211038398effe8adbb93eda17dbb136f17fa5ba62e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T10:32:55Z\\\",\\\"message\\\":\\\"2025-11-25T10:32:09+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_39ca8145-fa4d-4ac0-ba01-62afbe2deb27\\\\n2025-11-25T10:32:09+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_39ca8145-fa4d-4ac0-ba01-62afbe2deb27 to /host/opt/cni/bin/\\\\n2025-11-25T10:32:10Z [verbose] multus-daemon started\\\\n2025-11-25T10:32:10Z [verbose] Readiness Indicator file check\\\\n2025-11-25T10:32:55Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdxm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rlpbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:04Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:04 crc kubenswrapper[4813]: I1125 10:33:04.193646 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qltmc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7637b907-3ae7-4b15-a4b9-a0c2217384a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://713975d4e8de4e14484cbd711f5279ddce3acad00571bf052b0ed728bd1a0ccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qvsb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qltmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:04Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:04 crc kubenswrapper[4813]: I1125 10:33:04.209606 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4s9w7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2ac9045-f02f-4149-afa5-61da1452d547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbdce0d7869276078c48cf3c335c37ec3c8f324e76db30e312485508977ed8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://792d5ec80cac3667bf3ad534b473ae86eca391f49782cfc0938d789eefd24a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://792d5ec80cac3667bf3ad534b473ae86eca391f49782cfc0938d789eefd24a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2afd11e5128cad91161f49b1e5d6ac378dbd319773996dbe702bf678a45a4a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2afd11e5128cad91161f49b1e5d6ac378dbd319773996dbe702bf678a45a4a91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00af788f1e52f5e8adb3f20e61f5fbcfd1090e97a1f24d4ebe926dad23155ae5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00af788f1e52f5e8adb3f20e61f5fbcfd1090e97a1f24d4ebe926dad23155ae5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://156bff53f3008351c3f76a0cc5e9c3eeb4f19a7201392d095bc62012791d9fa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://156bff53f3008351c3f76a0cc5e9c3eeb4f19a7201392d095bc62012791d9fa5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a98899b475454bf9249b6437439cb15a56278a71678cd2c7a430b4c14ef4022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a98899b475454bf9249b6437439cb15a56278a71678cd2c7a430b4c14ef4022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://345ac26e481961ce51e21644b04d31cd5a82c981e9a2355ddd863036cabb4a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://345ac26e481961ce51e21644b04d31cd5a82c981e9a2355ddd863036cabb4a4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4s9w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:04Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:04 crc kubenswrapper[4813]: I1125 10:33:04.232763 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8460ec76-ba89-4f8f-9055-d7274ab52d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0292e263e2315d5f0352fb15d9e84e89f103c0b8e3371db2a611b001c5a3fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ab3178c217051fe9026c77a963c194bed57ec0fb9521678f41c7c16235ca789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee35613ff013fdd9f9ba4aa81006a99cd328ab65010b9b337815829bfcc88937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1581fa41d3a426258f7c464d5e0f2ad431917ccec0616d26bb8b0affa320c90e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c4c4032f6080041e0b54686cb2c9981d2578e7a2bd02bcc1cf008c8fa3bfb6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7324d51c21107fadbd2f170e16f3cc20fc473ca9b7b1bbe0fc5e64378bd6ab7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47a786668d4e29437970008a1e91d74d92c964ba10a6eba1f8d405d05a26e7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f47ead7e465395c7960e5ab292e2f2869ed1630436f2739b1e0420f217a96cf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T10:32:36Z\\\",\\\"message\\\":\\\"[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {8efa4d1a-72f5-4dfa-9bc2-9d93ef11ecf2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1125 10:32:36.408644 6489 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:36Z is after 2025-08-24T17:21:41Z]\\\\nI1125 10:32:36.408623 6489 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver/check-endpoints]} name:Service_openshift-apiserver/check-endpoints_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none 
reject\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:33:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32898e756d7697bcb5b6ae6780b7b752be67b44b9ce8c2f2459477c7f0b0a28d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://6554bcb1ce7e97de39f99556fc4e3db63a583ea45bd87706a3c7737a8bde4f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6554bcb1ce7e97de39f99556fc4e3db63a583ea45bd87706a3c7737a8bde4f5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8s5k7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:04Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:04 crc kubenswrapper[4813]: I1125 10:33:04.247896 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-w28xl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4dw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4dw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-w28xl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:04Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:04 crc kubenswrapper[4813]: I1125 10:33:04.264109 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"061a2a52-878f-4543-8408-3a7b838f8881\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761ff3f6b4afa8edd4892d9fe727e977fb9700a8c7ab1c149c12bfa6431951c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf09669b247e0daa0787d296aa833570e1a542082a7a698bb499dc34f16fa4be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e593ff2a6412d8dfd3cd96e456f4fe9e2f8b04302d5b9036b828a3cf480b573\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11e2aa9eaa941ade1982256194422becbe3f375508cd507f603a822b10e03134\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:04Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:04 crc kubenswrapper[4813]: I1125 10:33:04.277620 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:04Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:04 crc kubenswrapper[4813]: I1125 10:33:04.281306 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:04 crc kubenswrapper[4813]: I1125 10:33:04.281396 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:04 crc kubenswrapper[4813]: I1125 10:33:04.281590 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:04 crc kubenswrapper[4813]: I1125 10:33:04.281616 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:04 crc kubenswrapper[4813]: I1125 10:33:04.281643 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:04Z","lastTransitionTime":"2025-11-25T10:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:04 crc kubenswrapper[4813]: I1125 10:33:04.295077 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:04Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:04 crc kubenswrapper[4813]: I1125 10:33:04.315663 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:04Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:04 crc kubenswrapper[4813]: I1125 10:33:04.384632 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:04 crc kubenswrapper[4813]: I1125 10:33:04.384690 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:04 crc kubenswrapper[4813]: I1125 10:33:04.384704 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:04 crc kubenswrapper[4813]: I1125 10:33:04.384729 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:04 crc kubenswrapper[4813]: I1125 10:33:04.384744 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:04Z","lastTransitionTime":"2025-11-25T10:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:04 crc kubenswrapper[4813]: I1125 10:33:04.487174 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:04 crc kubenswrapper[4813]: I1125 10:33:04.487217 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:04 crc kubenswrapper[4813]: I1125 10:33:04.487228 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:04 crc kubenswrapper[4813]: I1125 10:33:04.487242 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:04 crc kubenswrapper[4813]: I1125 10:33:04.487252 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:04Z","lastTransitionTime":"2025-11-25T10:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:04 crc kubenswrapper[4813]: I1125 10:33:04.590548 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:04 crc kubenswrapper[4813]: I1125 10:33:04.590615 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:04 crc kubenswrapper[4813]: I1125 10:33:04.590629 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:04 crc kubenswrapper[4813]: I1125 10:33:04.590693 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:04 crc kubenswrapper[4813]: I1125 10:33:04.590711 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:04Z","lastTransitionTime":"2025-11-25T10:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:04 crc kubenswrapper[4813]: I1125 10:33:04.621494 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 10:33:04 crc kubenswrapper[4813]: I1125 10:33:04.621510 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 10:33:04 crc kubenswrapper[4813]: E1125 10:33:04.621755 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 10:33:04 crc kubenswrapper[4813]: E1125 10:33:04.621617 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 10:33:04 crc kubenswrapper[4813]: I1125 10:33:04.693783 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:04 crc kubenswrapper[4813]: I1125 10:33:04.693845 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:04 crc kubenswrapper[4813]: I1125 10:33:04.693854 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:04 crc kubenswrapper[4813]: I1125 10:33:04.693875 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:04 crc kubenswrapper[4813]: I1125 10:33:04.693894 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:04Z","lastTransitionTime":"2025-11-25T10:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:04 crc kubenswrapper[4813]: I1125 10:33:04.796811 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:04 crc kubenswrapper[4813]: I1125 10:33:04.796865 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:04 crc kubenswrapper[4813]: I1125 10:33:04.796881 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:04 crc kubenswrapper[4813]: I1125 10:33:04.796901 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:04 crc kubenswrapper[4813]: I1125 10:33:04.796917 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:04Z","lastTransitionTime":"2025-11-25T10:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:04 crc kubenswrapper[4813]: I1125 10:33:04.900478 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:04 crc kubenswrapper[4813]: I1125 10:33:04.900538 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:04 crc kubenswrapper[4813]: I1125 10:33:04.900557 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:04 crc kubenswrapper[4813]: I1125 10:33:04.900582 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:04 crc kubenswrapper[4813]: I1125 10:33:04.900599 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:04Z","lastTransitionTime":"2025-11-25T10:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:05 crc kubenswrapper[4813]: I1125 10:33:05.003285 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:05 crc kubenswrapper[4813]: I1125 10:33:05.003327 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:05 crc kubenswrapper[4813]: I1125 10:33:05.003339 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:05 crc kubenswrapper[4813]: I1125 10:33:05.003357 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:05 crc kubenswrapper[4813]: I1125 10:33:05.003368 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:05Z","lastTransitionTime":"2025-11-25T10:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:05 crc kubenswrapper[4813]: I1125 10:33:05.026968 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8s5k7_8460ec76-ba89-4f8f-9055-d7274ab52d11/ovnkube-controller/3.log" Nov 25 10:33:05 crc kubenswrapper[4813]: I1125 10:33:05.027537 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8s5k7_8460ec76-ba89-4f8f-9055-d7274ab52d11/ovnkube-controller/2.log" Nov 25 10:33:05 crc kubenswrapper[4813]: I1125 10:33:05.030066 4813 generic.go:334] "Generic (PLEG): container finished" podID="8460ec76-ba89-4f8f-9055-d7274ab52d11" containerID="c47a786668d4e29437970008a1e91d74d92c964ba10a6eba1f8d405d05a26e7b" exitCode=1 Nov 25 10:33:05 crc kubenswrapper[4813]: I1125 10:33:05.030110 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" event={"ID":"8460ec76-ba89-4f8f-9055-d7274ab52d11","Type":"ContainerDied","Data":"c47a786668d4e29437970008a1e91d74d92c964ba10a6eba1f8d405d05a26e7b"} Nov 25 10:33:05 crc kubenswrapper[4813]: I1125 10:33:05.030156 4813 scope.go:117] "RemoveContainer" containerID="0f47ead7e465395c7960e5ab292e2f2869ed1630436f2739b1e0420f217a96cf" Nov 25 10:33:05 crc kubenswrapper[4813]: I1125 10:33:05.030723 4813 scope.go:117] "RemoveContainer" containerID="c47a786668d4e29437970008a1e91d74d92c964ba10a6eba1f8d405d05a26e7b" Nov 25 10:33:05 crc kubenswrapper[4813]: E1125 10:33:05.030995 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-8s5k7_openshift-ovn-kubernetes(8460ec76-ba89-4f8f-9055-d7274ab52d11)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" podUID="8460ec76-ba89-4f8f-9055-d7274ab52d11" Nov 25 10:33:05 crc kubenswrapper[4813]: I1125 10:33:05.044789 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86379c39-b839-4552-949c-35431188a3a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf4d6feac8fd516ce2d5e2ec13519c2bbd2d152cffe7c434fe2c4b478e8c9a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80f2017cddd8c12997b1818074df5aa37a902dca43c4b60dda58080e1887f8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225dc69c294a0063eda858d71902e848fb59d4595c25bfeecdf8dfb60fdcd6f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cbb3888ff07d07784e188a0b7b49e0f5b421cfaeb61924a0a46094fb3795b32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e393f04b541e0fc8c686b42396605529aa65fdaaf6602dd7c64a322a5071d643\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T10:31:57Z\\\",\\\"message\\\":\\\"W1125 10:31:46.900040 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1125 10:31:46.900557 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764066706 cert, and key in /tmp/serving-cert-1749499007/serving-signer.crt, /tmp/serving-cert-1749499007/serving-signer.key\\\\nI1125 10:31:47.317086 1 observer_polling.go:159] Starting file observer\\\\nW1125 10:31:47.321027 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 10:31:47.321219 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 10:31:47.325062 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1749499007/tls.crt::/tmp/serving-cert-1749499007/tls.key\\\\\\\"\\\\nF1125 10:31:57.761534 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46e1b456988c700012c86fac792b65d2e7c9a049057d5a17efbf600418191910\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3af1cb4a9a556e116874a9b7f7cb75a99db83b991b65280b6b2a96233ca0a85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://f3af1cb4a9a556e116874a9b7f7cb75a99db83b991b65280b6b2a96233ca0a85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:31:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:05Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:05 crc kubenswrapper[4813]: I1125 10:33:05.055102 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mmh87" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bcb41f8-67f5-4a87-8b49-07da054e0c81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fbf69eb2f0afb160e40675e9a17e8a9798a3f02de6a2f3aae7a30ef989e5479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xtc7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mmh87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:05Z is after 
2025-08-24T17:21:41Z" Nov 25 10:33:05 crc kubenswrapper[4813]: I1125 10:33:05.068317 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ece7e9c-d49a-4348-98ec-bd6ab589f750\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b85e2f2d2a870b205f19402a20540fa67104d12d2fcd412ada24c78b0602f2ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j55j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c16599a2b18976267f55176085b4b11e3e253e308707081d06d28d64f4dbb627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j55j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-knhz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:05Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:05 crc kubenswrapper[4813]: I1125 10:33:05.083706 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sbzfj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eccc6bcf-65c9-4741-a1d7-e5545661d3d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf35ea2947d355207c657bf7ef54d855cead727db293543efaa653bb03718f6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8s86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75f58510a2e937f933fadfec014e5ddff8e6cea4df17e8ade67f4c7af9be7104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8s86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:17Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sbzfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:05Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:05 crc kubenswrapper[4813]: I1125 10:33:05.100385 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7391b3f2-dce9-4286-b622-7e7202a042c0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b823e81d1130cdb4373ba0b3d00a5f2d0717e34dcf36d2172550263b44e953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fa62598abd071ec69894326a022e35c2b383a5d5a1b893b0ecc1e30b8b775ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21dd198f1963287a0866dc0aa9d9854472f833cac0d0146a142a370e236b09f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-contr
oller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9ab19e784bbd45e4f4c23288211674ac0d0affbe2736d338967e9237d672760\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9ab19e784bbd45e4f4c23288211674ac0d0affbe2736d338967e9237d672760\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:31:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:05Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:05 crc kubenswrapper[4813]: I1125 10:33:05.105794 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:05 crc kubenswrapper[4813]: I1125 10:33:05.105976 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:05 crc kubenswrapper[4813]: I1125 10:33:05.106078 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:05 crc kubenswrapper[4813]: I1125 10:33:05.106181 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:05 crc kubenswrapper[4813]: I1125 10:33:05.106262 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:05Z","lastTransitionTime":"2025-11-25T10:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:05 crc kubenswrapper[4813]: I1125 10:33:05.120105 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00ebb057ca6152197fa76fc78787533ab8ddaa1e1a096c624e3efc5fcf091332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616fae5157b8d51f903f870d19e7ed40447c3eb954b0e1bd0b3323c27deb59f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:05Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:05 crc kubenswrapper[4813]: I1125 10:33:05.134067 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adac7b8b6297f077adc2d0e402547d19845a4b66a1279e143ba89f014ccdbf15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:05Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:05 crc kubenswrapper[4813]: I1125 10:33:05.155490 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rlpbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98439068-3c89-4c1b-8bb8-8aa848ef0cd3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e45d1cfd847d1fbd71b9790ea8725a76ffc6117b372d227e921dad0143f7b30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73be3b0cabd20c94bd5c69211038398effe8adbb93eda17dbb136f17fa5ba62e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T10:32:55Z\\\",\\\"message\\\":\\\"2025-11-25T10:32:09+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_39ca8145-fa4d-4ac0-ba01-62afbe2deb27\\\\n2025-11-25T10:32:09+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_39ca8145-fa4d-4ac0-ba01-62afbe2deb27 to /host/opt/cni/bin/\\\\n2025-11-25T10:32:10Z [verbose] multus-daemon started\\\\n2025-11-25T10:32:10Z [verbose] Readiness Indicator file check\\\\n2025-11-25T10:32:55Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdxm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rlpbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:05Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:05 crc kubenswrapper[4813]: I1125 10:33:05.169876 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qltmc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7637b907-3ae7-4b15-a4b9-a0c2217384a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://713975d4e8de4e14484cbd711f5279ddce3acad00571bf052b0ed728bd1a0ccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qvsb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qltmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:05Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:05 crc kubenswrapper[4813]: I1125 10:33:05.187198 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4s9w7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2ac9045-f02f-4149-afa5-61da1452d547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbdce0d7869276078c48cf3c335c37ec3c8f324e76db30e312485508977ed8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://792d5ec80cac3667bf3ad534b473ae86eca391f49782cfc0938d789eefd24a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://792d5ec80cac3667bf3ad534b473ae86eca391f49782cfc0938d789eefd24a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2afd11e5128cad91161f49b1e5d6ac378dbd319773996dbe702bf678a45a4a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2afd11e5128cad91161f49b1e5d6ac378dbd319773996dbe702bf678a45a4a91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00af788f1e52f5e8adb3f20e61f5fbcfd1090e97a1f24d4ebe926dad23155ae5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00af788f1e52f5e8adb3f20e61f5fbcfd1090e97a1f24d4ebe926dad23155ae5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://156bff53f3008351c3f76a0cc5e9c3eeb4f19a7201392d095bc62012791d9fa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://156bff53f3008351c3f76a0cc5e9c3eeb4f19a7201392d095bc62012791d9fa5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a98899b475454bf9249b6437439cb15a56278a71678cd2c7a430b4c14ef4022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a98899b475454bf9249b6437439cb15a56278a71678cd2c7a430b4c14ef4022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://345ac26e481961ce51e21644b04d31cd5a82c981e9a2355ddd863036cabb4a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://345ac26e481961ce51e21644b04d31cd5a82c981e9a2355ddd863036cabb4a4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4s9w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:05Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:05 crc kubenswrapper[4813]: I1125 10:33:05.208255 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8460ec76-ba89-4f8f-9055-d7274ab52d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0292e263e2315d5f0352fb15d9e84e89f103c0b8e3371db2a611b001c5a3fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ab3178c217051fe9026c77a963c194bed57ec0fb9521678f41c7c16235ca789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee35613ff013fdd9f9ba4aa81006a99cd328ab65010b9b337815829bfcc88937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1581fa41d3a426258f7c464d5e0f2ad431917ccec0616d26bb8b0affa320c90e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c4c4032f6080041e0b54686cb2c9981d2578e7a2bd02bcc1cf008c8fa3bfb6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7324d51c21107fadbd2f170e16f3cc20fc473ca9b7b1bbe0fc5e64378bd6ab7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47a786668d4e29437970008a1e91d74d92c964ba10a6eba1f8d405d05a26e7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f47ead7e465395c7960e5ab292e2f2869ed1630436f2739b1e0420f217a96cf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T10:32:36Z\\\",\\\"message\\\":\\\"[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {8efa4d1a-72f5-4dfa-9bc2-9d93ef11ecf2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1125 10:32:36.408644 6489 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:32:36Z is after 2025-08-24T17:21:41Z]\\\\nI1125 10:32:36.408623 6489 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver/check-endpoints]} name:Service_openshift-apiserver/check-endpoints_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c47a786668d4e29437970008a1e91d74d92c964ba10a6eba1f8d405d05a26e7b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T10:33:04Z\\\",\\\"message\\\":\\\"-manager/kube-controller-manager_TCP_cluster\\\\\\\", 
UUID:\\\\\\\"ba175bbe-5cc4-47e6-a32d-57693e1320bd\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-controller-manager/kube-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-controller-manager/kube-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-controller-manager/kube-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.36\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF1125 10:33:04.482189 6884 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T10:33:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"
name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32898e756d7697bcb5b6ae6780b7b752be67b44b9ce8c2f2459477c7f0b0a28d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6554bcb1ce7e97de39f99556fc4e3db63a583ea45bd87706a3c7737a8bde4f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6554bcb1ce7e97de39f99556fc4e3db63a583ea45bd87706a3c7737a8bde4f5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8s5k7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:05Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:05 crc kubenswrapper[4813]: I1125 10:33:05.208937 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:05 crc kubenswrapper[4813]: I1125 10:33:05.208988 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:05 crc kubenswrapper[4813]: I1125 10:33:05.209001 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:05 crc kubenswrapper[4813]: 
I1125 10:33:05.209022 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:05 crc kubenswrapper[4813]: I1125 10:33:05.209035 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:05Z","lastTransitionTime":"2025-11-25T10:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:05 crc kubenswrapper[4813]: I1125 10:33:05.224072 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-w28xl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4dw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4dw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:19Z\\\"}}\" for pod 
\"openshift-multus\"/\"network-metrics-daemon-w28xl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:05Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:05 crc kubenswrapper[4813]: I1125 10:33:05.238402 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"061a2a52-878f-4543-8408-3a7b838f8881\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761ff3f6b4afa8edd4892d9fe727e977fb9700a8c7ab1c149c12bfa6431951c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf09669b247e0daa0787d296aa833570e1a542082a7a698bb499dc34f16fa4be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e593ff2a6412d8dfd3cd96e456f4fe9e2f8b04302d5b9036b828a3cf480b573\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11e2aa9eaa941ade1982256194422becbe3f375508cd507f603a822b10e03134\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:05Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:05 crc kubenswrapper[4813]: I1125 10:33:05.254274 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:05Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:05 crc kubenswrapper[4813]: I1125 10:33:05.269118 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:05Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:05 crc kubenswrapper[4813]: I1125 10:33:05.282518 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:05Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:05 crc kubenswrapper[4813]: I1125 10:33:05.305080 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03303956e8d88df49c9c142a7074fa39272a78ea67e868b302d3a663d7f7178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:05Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:05 crc kubenswrapper[4813]: I1125 10:33:05.312273 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 25 10:33:05 crc kubenswrapper[4813]: I1125 10:33:05.312305 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:05 crc kubenswrapper[4813]: I1125 10:33:05.312317 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:05 crc kubenswrapper[4813]: I1125 10:33:05.312334 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:05 crc kubenswrapper[4813]: I1125 10:33:05.312347 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:05Z","lastTransitionTime":"2025-11-25T10:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:05 crc kubenswrapper[4813]: I1125 10:33:05.415129 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:05 crc kubenswrapper[4813]: I1125 10:33:05.415210 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:05 crc kubenswrapper[4813]: I1125 10:33:05.415223 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:05 crc kubenswrapper[4813]: I1125 10:33:05.415244 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:05 crc kubenswrapper[4813]: I1125 10:33:05.415256 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:05Z","lastTransitionTime":"2025-11-25T10:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:05 crc kubenswrapper[4813]: I1125 10:33:05.517659 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:05 crc kubenswrapper[4813]: I1125 10:33:05.517766 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:05 crc kubenswrapper[4813]: I1125 10:33:05.517780 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:05 crc kubenswrapper[4813]: I1125 10:33:05.517798 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:05 crc kubenswrapper[4813]: I1125 10:33:05.517810 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:05Z","lastTransitionTime":"2025-11-25T10:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:05 crc kubenswrapper[4813]: I1125 10:33:05.620622 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 10:33:05 crc kubenswrapper[4813]: I1125 10:33:05.620664 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:05 crc kubenswrapper[4813]: I1125 10:33:05.620731 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:05 crc kubenswrapper[4813]: I1125 10:33:05.620747 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:05 crc kubenswrapper[4813]: I1125 10:33:05.620776 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:05 crc kubenswrapper[4813]: I1125 10:33:05.620793 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:05Z","lastTransitionTime":"2025-11-25T10:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:05 crc kubenswrapper[4813]: I1125 10:33:05.620742 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w28xl" Nov 25 10:33:05 crc kubenswrapper[4813]: E1125 10:33:05.620842 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 10:33:05 crc kubenswrapper[4813]: E1125 10:33:05.621026 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-w28xl" podUID="74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2" Nov 25 10:33:05 crc kubenswrapper[4813]: I1125 10:33:05.724069 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:05 crc kubenswrapper[4813]: I1125 10:33:05.724124 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:05 crc kubenswrapper[4813]: I1125 10:33:05.724138 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:05 crc kubenswrapper[4813]: I1125 10:33:05.724156 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:05 crc kubenswrapper[4813]: I1125 10:33:05.724170 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:05Z","lastTransitionTime":"2025-11-25T10:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:05 crc kubenswrapper[4813]: I1125 10:33:05.826746 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:05 crc kubenswrapper[4813]: I1125 10:33:05.826794 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:05 crc kubenswrapper[4813]: I1125 10:33:05.826803 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:05 crc kubenswrapper[4813]: I1125 10:33:05.826821 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:05 crc kubenswrapper[4813]: I1125 10:33:05.826831 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:05Z","lastTransitionTime":"2025-11-25T10:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:05 crc kubenswrapper[4813]: I1125 10:33:05.930207 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:05 crc kubenswrapper[4813]: I1125 10:33:05.930258 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:05 crc kubenswrapper[4813]: I1125 10:33:05.930271 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:05 crc kubenswrapper[4813]: I1125 10:33:05.930288 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:05 crc kubenswrapper[4813]: I1125 10:33:05.930298 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:05Z","lastTransitionTime":"2025-11-25T10:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:06 crc kubenswrapper[4813]: I1125 10:33:06.032505 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:06 crc kubenswrapper[4813]: I1125 10:33:06.032565 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:06 crc kubenswrapper[4813]: I1125 10:33:06.032586 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:06 crc kubenswrapper[4813]: I1125 10:33:06.032611 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:06 crc kubenswrapper[4813]: I1125 10:33:06.032628 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:06Z","lastTransitionTime":"2025-11-25T10:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:06 crc kubenswrapper[4813]: I1125 10:33:06.035414 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8s5k7_8460ec76-ba89-4f8f-9055-d7274ab52d11/ovnkube-controller/3.log" Nov 25 10:33:06 crc kubenswrapper[4813]: I1125 10:33:06.039590 4813 scope.go:117] "RemoveContainer" containerID="c47a786668d4e29437970008a1e91d74d92c964ba10a6eba1f8d405d05a26e7b" Nov 25 10:33:06 crc kubenswrapper[4813]: E1125 10:33:06.039788 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-8s5k7_openshift-ovn-kubernetes(8460ec76-ba89-4f8f-9055-d7274ab52d11)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" podUID="8460ec76-ba89-4f8f-9055-d7274ab52d11" Nov 25 10:33:06 crc kubenswrapper[4813]: I1125 10:33:06.059832 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00ebb057ca6152197fa76fc78787533ab8ddaa1e1a096c624e3efc5fcf091332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616fae5157b8d51f903f870d19e7ed40447c3eb954b0e1bd0b3323c27deb59f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identi
ty-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:06Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:06 crc kubenswrapper[4813]: I1125 10:33:06.074033 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adac7b8b6297f077adc2d0e402547d19845a4b66a1279e143ba89f014ccdbf15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:06Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:06 crc kubenswrapper[4813]: I1125 10:33:06.087406 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rlpbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98439068-3c89-4c1b-8bb8-8aa848ef0cd3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e45d1cfd847d1fbd71b9790ea8725a76ffc6117b372d227e921dad0143f7b30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73be3b0cabd20c94bd5c69211038398effe8adbb93eda17dbb136f17fa5ba62e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T10:32:55Z\\\",\\\"message\\\":\\\"2025-11-25T10:32:09+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_39ca8145-fa4d-4ac0-ba01-62afbe2deb27\\\\n2025-11-25T10:32:09+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_39ca8145-fa4d-4ac0-ba01-62afbe2deb27 to /host/opt/cni/bin/\\\\n2025-11-25T10:32:10Z [verbose] multus-daemon started\\\\n2025-11-25T10:32:10Z [verbose] Readiness Indicator file check\\\\n2025-11-25T10:32:55Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdxm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rlpbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:06Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:06 crc kubenswrapper[4813]: I1125 10:33:06.100223 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qltmc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7637b907-3ae7-4b15-a4b9-a0c2217384a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://713975d4e8de4e14484cbd711f5279ddce3acad00571bf052b0ed728bd1a0ccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qvsb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qltmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:06Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:06 crc kubenswrapper[4813]: I1125 10:33:06.112290 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sbzfj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eccc6bcf-65c9-4741-a1d7-e5545661d3d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf35ea2947d355207c657bf7ef54d855cead727db293543efaa653bb03718f6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8s86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75f58510a2e937f933fadfec014e5ddff8e6cea4df17e8ade67f4c7af9be7104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8s86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sbzfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:06Z is after 2025-08-24T17:21:41Z" Nov 25 
10:33:06 crc kubenswrapper[4813]: I1125 10:33:06.126780 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7391b3f2-dce9-4286-b622-7e7202a042c0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b823e81d1130cdb4373ba0b3d00a5f2d0717e34dcf36d2172550263b44e953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fa62598abd071ec69894326a022e35c2b383a5d5a1b893b0ecc1e30b8b775ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21dd198f1963287a0866dc0aa9d9854472f833cac0d0146a142a370e236b09f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9ab19e784bbd45e4f4c23288211674ac0d0affbe2736d338967e9237d672760\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9ab19e784bbd45e4f4c23288211674ac0d0affbe2736d338967e9237d672760\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:31:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:06Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:06 crc kubenswrapper[4813]: I1125 10:33:06.136052 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:06 crc kubenswrapper[4813]: I1125 10:33:06.136115 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:06 crc kubenswrapper[4813]: I1125 10:33:06.136131 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:06 crc kubenswrapper[4813]: I1125 10:33:06.136156 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:06 crc kubenswrapper[4813]: I1125 10:33:06.136177 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:06Z","lastTransitionTime":"2025-11-25T10:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:06 crc kubenswrapper[4813]: I1125 10:33:06.142383 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:06Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:06 crc kubenswrapper[4813]: I1125 10:33:06.159240 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:06Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:06 crc kubenswrapper[4813]: I1125 10:33:06.174096 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:06Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:06 crc kubenswrapper[4813]: I1125 10:33:06.190721 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4s9w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2ac9045-f02f-4149-afa5-61da1452d547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbdce0d7869276078c48cf3c335c37ec3c8f324e76db30e312485508977ed8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://792d5ec80cac3667bf3ad534b473ae86eca391f49782cfc0938d789eefd24a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://792d5ec80cac3667bf3ad534b473ae86eca391f49782cfc0938d789eefd24a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2afd11e5128cad91161f49b1e5d6ac378dbd319773996dbe702bf678a45a4a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2afd11e5128cad91161f49b1e5d6ac378dbd319773996dbe702bf678a45a4a91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00af788f1e52f5e8adb3f20e61f5fbcfd1090e97a1f24d4ebe926dad23155ae5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00af788f1e52f5e8adb3f20e61f5fbcfd1090e97a1f24d4ebe926dad23155ae5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://156bff53f3008351c3f76a0cc5e9c3eeb4f19a7201392d095bc62012791d9fa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://156bff53f3008351c3f76a0cc5e9c3eeb4f19a7201392d095bc62012791d9fa5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a98899b475454bf9249b6437439cb15a56278a71678cd2c7a430b4c14ef4022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a98899b475454bf9249b6437439cb15a56278a71678cd2c7a430b4c14ef4022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://345ac26e481961ce51e21644b04d31cd5a82c981e9a2355ddd863036cabb4a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://345ac26e481961ce51e21644b04d31cd5a82c981e9a2355ddd863036cabb4a4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4s9w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:06Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:06 crc kubenswrapper[4813]: I1125 10:33:06.213222 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8460ec76-ba89-4f8f-9055-d7274ab52d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0292e263e2315d5f0352fb15d9e84e89f103c0b8e3371db2a611b001c5a3fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ab3178c217051fe9026c77a963c194bed57ec0fb9521678f41c7c16235ca789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee35613ff013fdd9f9ba4aa81006a99cd328ab65010b9b337815829bfcc88937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1581fa41d3a426258f7c464d5e0f2ad431917ccec0616d26bb8b0affa320c90e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c4c4032f6080041e0b54686cb2c9981d2578e7a2bd02bcc1cf008c8fa3bfb6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7324d51c21107fadbd2f170e16f3cc20fc473ca9b7b1bbe0fc5e64378bd6ab7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47a786668d4e29437970008a1e91d74d92c964b
a10a6eba1f8d405d05a26e7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c47a786668d4e29437970008a1e91d74d92c964ba10a6eba1f8d405d05a26e7b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T10:33:04Z\\\",\\\"message\\\":\\\"-manager/kube-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"ba175bbe-5cc4-47e6-a32d-57693e1320bd\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-controller-manager/kube-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-controller-manager/kube-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-controller-manager/kube-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.36\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF1125 10:33:04.482189 6884 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T10:33:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8s5k7_openshift-ovn-kubernetes(8460ec76-ba89-4f8f-9055-d7274ab52d11)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32898e756d7697bcb5b6ae6780b7b752be67b44b9ce8c2f2459477c7f0b0a28d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6554bcb1ce7e97de39f99556fc4e3db63a583ea45bd87706a3c7737a8bde4f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6554bcb1ce7e97de39f99556fc4e3db63a583ea45bd87706a3c7737a8bde4f5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8s5k7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:06Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:06 crc kubenswrapper[4813]: I1125 10:33:06.224371 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-w28xl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4dw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4dw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-w28xl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:06Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:06 crc kubenswrapper[4813]: I1125 10:33:06.237716 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"061a2a52-878f-4543-8408-3a7b838f8881\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761ff3f6b4afa8edd4892d9fe727e977fb9700a8c7ab1c149c12bfa6431951c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf09669b247e0daa0787d296aa833570e1a542082a7a698bb499dc34f16fa4be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e593ff2a6412d8dfd3cd96e456f4fe9e2f8b04302d5b9036b828a3cf480b573\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11e2aa9eaa941ade1982256194422becbe3f375508cd507f603a822b10e03134\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:06Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:06 crc kubenswrapper[4813]: I1125 10:33:06.238508 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:06 crc kubenswrapper[4813]: I1125 10:33:06.238589 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:06 crc kubenswrapper[4813]: I1125 10:33:06.238604 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:06 crc kubenswrapper[4813]: I1125 10:33:06.238621 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:06 crc kubenswrapper[4813]: I1125 10:33:06.238631 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:06Z","lastTransitionTime":"2025-11-25T10:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:06 crc kubenswrapper[4813]: I1125 10:33:06.252699 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03303956e8d88df49c9c142a7074fa39272a78ea67e868b302d3a663d7f7178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:06Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:06 crc kubenswrapper[4813]: I1125 10:33:06.265111 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mmh87" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bcb41f8-67f5-4a87-8b49-07da054e0c81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fbf69eb2f0afb160e40675e9a17e8a9798a3f02de6a2f3aae7a30ef989e5479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xtc7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mmh87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:06Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:06 crc kubenswrapper[4813]: I1125 10:33:06.275996 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ece7e9c-d49a-4348-98ec-bd6ab589f750\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b85e2f2d2a870b205f19402a20540fa67104d12d2fcd412ada24c78b0602f2ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j55j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c16599a2b18976267f55176085b4b11e3e253e308707081d06d28d64f4dbb627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j55j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-knhz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:06Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:06 crc kubenswrapper[4813]: I1125 10:33:06.293746 4813 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86379c39-b839-4552-949c-35431188a3a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf4d6feac8fd516ce2d5e2ec13519c2bbd2d152cffe7c434fe2c4b478e8c9a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80f2017cddd8c12997b1818074df5aa37a902dca43c4b60dda58080e1887f8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225dc69c294a0063eda858d71902e848fb59d4595c25bfeecdf8dfb60fdcd6f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cbb3888ff07d07784e188a0b7b49e0f5b421cfaeb6
1924a0a46094fb3795b32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e393f04b541e0fc8c686b42396605529aa65fdaaf6602dd7c64a322a5071d643\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T10:31:57Z\\\",\\\"message\\\":\\\"W1125 10:31:46.900040 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1125 10:31:46.900557 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764066706 cert, and key in /tmp/serving-cert-1749499007/serving-signer.crt, /tmp/serving-cert-1749499007/serving-signer.key\\\\nI1125 10:31:47.317086 1 observer_polling.go:159] Starting file observer\\\\nW1125 10:31:47.321027 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 10:31:47.321219 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 10:31:47.325062 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1749499007/tls.crt::/tmp/serving-cert-1749499007/tls.key\\\\\\\"\\\\nF1125 10:31:57.761534 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46e1b456988c700012c86fac792b65d2e7c9a049057d5a17efbf600418191910\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3af1cb4a9a556e116874a9b7f7cb75a99db83b991b65280b6b2a96233ca0a85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3af1cb4a9a556e116874a9b7f7cb75a99db83b991b65280b6b2a96233ca0a85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:31:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:06Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:06 crc kubenswrapper[4813]: I1125 10:33:06.342103 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:06 crc kubenswrapper[4813]: I1125 10:33:06.342149 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:06 crc kubenswrapper[4813]: I1125 10:33:06.342159 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:06 crc kubenswrapper[4813]: I1125 10:33:06.342173 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:06 crc kubenswrapper[4813]: I1125 10:33:06.342183 4813 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:06Z","lastTransitionTime":"2025-11-25T10:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:06 crc kubenswrapper[4813]: I1125 10:33:06.444320 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:06 crc kubenswrapper[4813]: I1125 10:33:06.444381 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:06 crc kubenswrapper[4813]: I1125 10:33:06.444396 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:06 crc kubenswrapper[4813]: I1125 10:33:06.444417 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:06 crc kubenswrapper[4813]: I1125 10:33:06.444431 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:06Z","lastTransitionTime":"2025-11-25T10:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:06 crc kubenswrapper[4813]: I1125 10:33:06.547650 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:06 crc kubenswrapper[4813]: I1125 10:33:06.547732 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:06 crc kubenswrapper[4813]: I1125 10:33:06.547748 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:06 crc kubenswrapper[4813]: I1125 10:33:06.547769 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:06 crc kubenswrapper[4813]: I1125 10:33:06.547784 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:06Z","lastTransitionTime":"2025-11-25T10:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:06 crc kubenswrapper[4813]: I1125 10:33:06.621454 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 10:33:06 crc kubenswrapper[4813]: I1125 10:33:06.621625 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 10:33:06 crc kubenswrapper[4813]: E1125 10:33:06.622129 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 10:33:06 crc kubenswrapper[4813]: E1125 10:33:06.621925 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 10:33:06 crc kubenswrapper[4813]: I1125 10:33:06.651821 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:06 crc kubenswrapper[4813]: I1125 10:33:06.652096 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:06 crc kubenswrapper[4813]: I1125 10:33:06.652185 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:06 crc kubenswrapper[4813]: I1125 10:33:06.652281 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:06 crc kubenswrapper[4813]: I1125 10:33:06.652431 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:06Z","lastTransitionTime":"2025-11-25T10:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:06 crc kubenswrapper[4813]: I1125 10:33:06.754662 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:06 crc kubenswrapper[4813]: I1125 10:33:06.754714 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:06 crc kubenswrapper[4813]: I1125 10:33:06.754723 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:06 crc kubenswrapper[4813]: I1125 10:33:06.754735 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:06 crc kubenswrapper[4813]: I1125 10:33:06.754743 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:06Z","lastTransitionTime":"2025-11-25T10:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:06 crc kubenswrapper[4813]: I1125 10:33:06.857817 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:06 crc kubenswrapper[4813]: I1125 10:33:06.857893 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:06 crc kubenswrapper[4813]: I1125 10:33:06.857908 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:06 crc kubenswrapper[4813]: I1125 10:33:06.857932 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:06 crc kubenswrapper[4813]: I1125 10:33:06.857948 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:06Z","lastTransitionTime":"2025-11-25T10:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:06 crc kubenswrapper[4813]: I1125 10:33:06.960791 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:06 crc kubenswrapper[4813]: I1125 10:33:06.960871 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:06 crc kubenswrapper[4813]: I1125 10:33:06.960888 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:06 crc kubenswrapper[4813]: I1125 10:33:06.960913 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:06 crc kubenswrapper[4813]: I1125 10:33:06.960929 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:06Z","lastTransitionTime":"2025-11-25T10:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:07 crc kubenswrapper[4813]: I1125 10:33:07.064000 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:07 crc kubenswrapper[4813]: I1125 10:33:07.064062 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:07 crc kubenswrapper[4813]: I1125 10:33:07.064078 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:07 crc kubenswrapper[4813]: I1125 10:33:07.064102 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:07 crc kubenswrapper[4813]: I1125 10:33:07.064122 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:07Z","lastTransitionTime":"2025-11-25T10:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:07 crc kubenswrapper[4813]: I1125 10:33:07.167130 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:07 crc kubenswrapper[4813]: I1125 10:33:07.167377 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:07 crc kubenswrapper[4813]: I1125 10:33:07.167444 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:07 crc kubenswrapper[4813]: I1125 10:33:07.167519 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:07 crc kubenswrapper[4813]: I1125 10:33:07.167592 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:07Z","lastTransitionTime":"2025-11-25T10:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:07 crc kubenswrapper[4813]: I1125 10:33:07.271308 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:07 crc kubenswrapper[4813]: I1125 10:33:07.271601 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:07 crc kubenswrapper[4813]: I1125 10:33:07.271671 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:07 crc kubenswrapper[4813]: I1125 10:33:07.271783 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:07 crc kubenswrapper[4813]: I1125 10:33:07.271849 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:07Z","lastTransitionTime":"2025-11-25T10:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:07 crc kubenswrapper[4813]: I1125 10:33:07.374948 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:07 crc kubenswrapper[4813]: I1125 10:33:07.375323 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:07 crc kubenswrapper[4813]: I1125 10:33:07.375481 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:07 crc kubenswrapper[4813]: I1125 10:33:07.375630 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:07 crc kubenswrapper[4813]: I1125 10:33:07.375818 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:07Z","lastTransitionTime":"2025-11-25T10:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:07 crc kubenswrapper[4813]: I1125 10:33:07.478270 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:07 crc kubenswrapper[4813]: I1125 10:33:07.478987 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:07 crc kubenswrapper[4813]: I1125 10:33:07.479155 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:07 crc kubenswrapper[4813]: I1125 10:33:07.479306 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:07 crc kubenswrapper[4813]: I1125 10:33:07.479451 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:07Z","lastTransitionTime":"2025-11-25T10:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:07 crc kubenswrapper[4813]: I1125 10:33:07.581477 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:07 crc kubenswrapper[4813]: I1125 10:33:07.581509 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:07 crc kubenswrapper[4813]: I1125 10:33:07.581519 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:07 crc kubenswrapper[4813]: I1125 10:33:07.581534 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:07 crc kubenswrapper[4813]: I1125 10:33:07.581544 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:07Z","lastTransitionTime":"2025-11-25T10:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:07 crc kubenswrapper[4813]: I1125 10:33:07.620890 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 10:33:07 crc kubenswrapper[4813]: E1125 10:33:07.621086 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 10:33:07 crc kubenswrapper[4813]: I1125 10:33:07.621499 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-w28xl" Nov 25 10:33:07 crc kubenswrapper[4813]: E1125 10:33:07.621669 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w28xl" podUID="74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2" Nov 25 10:33:07 crc kubenswrapper[4813]: I1125 10:33:07.684199 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:07 crc kubenswrapper[4813]: I1125 10:33:07.684240 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:07 crc kubenswrapper[4813]: I1125 10:33:07.684250 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:07 crc kubenswrapper[4813]: I1125 10:33:07.684265 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:07 crc kubenswrapper[4813]: I1125 10:33:07.684274 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:07Z","lastTransitionTime":"2025-11-25T10:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:07 crc kubenswrapper[4813]: I1125 10:33:07.779043 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 10:33:07 crc kubenswrapper[4813]: E1125 10:33:07.779327 4813 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 10:33:07 crc kubenswrapper[4813]: E1125 10:33:07.779834 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 10:34:11.779815429 +0000 UTC m=+148.909525315 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 10:33:07 crc kubenswrapper[4813]: I1125 10:33:07.787926 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:07 crc kubenswrapper[4813]: I1125 10:33:07.787958 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:07 crc kubenswrapper[4813]: I1125 10:33:07.788012 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:07 crc kubenswrapper[4813]: I1125 10:33:07.788027 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:07 crc kubenswrapper[4813]: I1125 10:33:07.788036 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:07Z","lastTransitionTime":"2025-11-25T10:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:07 crc kubenswrapper[4813]: I1125 10:33:07.880230 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:33:07 crc kubenswrapper[4813]: I1125 10:33:07.880419 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 10:33:07 crc kubenswrapper[4813]: I1125 10:33:07.880472 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 10:33:07 crc kubenswrapper[4813]: I1125 10:33:07.880521 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 10:33:07 crc kubenswrapper[4813]: E1125 10:33:07.880578 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-25 10:34:11.880551319 +0000 UTC m=+149.010261255 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:33:07 crc kubenswrapper[4813]: E1125 10:33:07.880653 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 10:33:07 crc kubenswrapper[4813]: E1125 10:33:07.880671 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 10:33:07 crc kubenswrapper[4813]: E1125 10:33:07.880711 4813 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 10:33:07 crc kubenswrapper[4813]: E1125 10:33:07.880723 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 10:33:07 crc kubenswrapper[4813]: E1125 10:33:07.880750 4813 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 10:33:07 crc kubenswrapper[4813]: E1125 10:33:07.880765 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 10:34:11.880754274 +0000 UTC m=+149.010464160 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 10:33:07 crc kubenswrapper[4813]: E1125 10:33:07.880811 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-25 10:34:11.880791545 +0000 UTC m=+149.010501501 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 10:33:07 crc kubenswrapper[4813]: E1125 10:33:07.880713 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 10:33:07 crc kubenswrapper[4813]: E1125 10:33:07.880839 4813 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 10:33:07 crc kubenswrapper[4813]: E1125 10:33:07.880900 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-25 10:34:11.880874918 +0000 UTC m=+149.010584804 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 10:33:07 crc kubenswrapper[4813]: I1125 10:33:07.889977 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:07 crc kubenswrapper[4813]: I1125 10:33:07.890010 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:07 crc kubenswrapper[4813]: I1125 10:33:07.890019 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:07 crc kubenswrapper[4813]: I1125 10:33:07.890034 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:07 crc kubenswrapper[4813]: I1125 10:33:07.890044 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:07Z","lastTransitionTime":"2025-11-25T10:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:07 crc kubenswrapper[4813]: I1125 10:33:07.992725 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:07 crc kubenswrapper[4813]: I1125 10:33:07.992784 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:07 crc kubenswrapper[4813]: I1125 10:33:07.992803 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:07 crc kubenswrapper[4813]: I1125 10:33:07.992828 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:07 crc kubenswrapper[4813]: I1125 10:33:07.992844 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:07Z","lastTransitionTime":"2025-11-25T10:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:08 crc kubenswrapper[4813]: I1125 10:33:08.095962 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:08 crc kubenswrapper[4813]: I1125 10:33:08.096022 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:08 crc kubenswrapper[4813]: I1125 10:33:08.096041 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:08 crc kubenswrapper[4813]: I1125 10:33:08.096067 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:08 crc kubenswrapper[4813]: I1125 10:33:08.096088 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:08Z","lastTransitionTime":"2025-11-25T10:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:08 crc kubenswrapper[4813]: I1125 10:33:08.199280 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:08 crc kubenswrapper[4813]: I1125 10:33:08.199363 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:08 crc kubenswrapper[4813]: I1125 10:33:08.199382 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:08 crc kubenswrapper[4813]: I1125 10:33:08.199405 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:08 crc kubenswrapper[4813]: I1125 10:33:08.199420 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:08Z","lastTransitionTime":"2025-11-25T10:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:08 crc kubenswrapper[4813]: I1125 10:33:08.302907 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:08 crc kubenswrapper[4813]: I1125 10:33:08.303077 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:08 crc kubenswrapper[4813]: I1125 10:33:08.303107 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:08 crc kubenswrapper[4813]: I1125 10:33:08.303136 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:08 crc kubenswrapper[4813]: I1125 10:33:08.303155 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:08Z","lastTransitionTime":"2025-11-25T10:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:08 crc kubenswrapper[4813]: I1125 10:33:08.406462 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:08 crc kubenswrapper[4813]: I1125 10:33:08.406526 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:08 crc kubenswrapper[4813]: I1125 10:33:08.406540 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:08 crc kubenswrapper[4813]: I1125 10:33:08.406556 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:08 crc kubenswrapper[4813]: I1125 10:33:08.406568 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:08Z","lastTransitionTime":"2025-11-25T10:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:08 crc kubenswrapper[4813]: I1125 10:33:08.510715 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:08 crc kubenswrapper[4813]: I1125 10:33:08.510786 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:08 crc kubenswrapper[4813]: I1125 10:33:08.510803 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:08 crc kubenswrapper[4813]: I1125 10:33:08.510826 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:08 crc kubenswrapper[4813]: I1125 10:33:08.510839 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:08Z","lastTransitionTime":"2025-11-25T10:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:08 crc kubenswrapper[4813]: I1125 10:33:08.615132 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:08 crc kubenswrapper[4813]: I1125 10:33:08.615189 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:08 crc kubenswrapper[4813]: I1125 10:33:08.615200 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:08 crc kubenswrapper[4813]: I1125 10:33:08.615220 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:08 crc kubenswrapper[4813]: I1125 10:33:08.615237 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:08Z","lastTransitionTime":"2025-11-25T10:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:08 crc kubenswrapper[4813]: I1125 10:33:08.621359 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 10:33:08 crc kubenswrapper[4813]: I1125 10:33:08.621372 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 10:33:08 crc kubenswrapper[4813]: E1125 10:33:08.621510 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 10:33:08 crc kubenswrapper[4813]: E1125 10:33:08.621600 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 10:33:08 crc kubenswrapper[4813]: I1125 10:33:08.718583 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:08 crc kubenswrapper[4813]: I1125 10:33:08.718651 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:08 crc kubenswrapper[4813]: I1125 10:33:08.718670 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:08 crc kubenswrapper[4813]: I1125 10:33:08.718732 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:08 crc kubenswrapper[4813]: I1125 10:33:08.718754 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:08Z","lastTransitionTime":"2025-11-25T10:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:08 crc kubenswrapper[4813]: I1125 10:33:08.821452 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:08 crc kubenswrapper[4813]: I1125 10:33:08.821523 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:08 crc kubenswrapper[4813]: I1125 10:33:08.821547 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:08 crc kubenswrapper[4813]: I1125 10:33:08.821576 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:08 crc kubenswrapper[4813]: I1125 10:33:08.821598 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:08Z","lastTransitionTime":"2025-11-25T10:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:08 crc kubenswrapper[4813]: I1125 10:33:08.926459 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:08 crc kubenswrapper[4813]: I1125 10:33:08.926598 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:08 crc kubenswrapper[4813]: I1125 10:33:08.926619 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:08 crc kubenswrapper[4813]: I1125 10:33:08.926644 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:08 crc kubenswrapper[4813]: I1125 10:33:08.926803 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:08Z","lastTransitionTime":"2025-11-25T10:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:09 crc kubenswrapper[4813]: I1125 10:33:09.029749 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:09 crc kubenswrapper[4813]: I1125 10:33:09.029789 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:09 crc kubenswrapper[4813]: I1125 10:33:09.029799 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:09 crc kubenswrapper[4813]: I1125 10:33:09.029817 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:09 crc kubenswrapper[4813]: I1125 10:33:09.029829 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:09Z","lastTransitionTime":"2025-11-25T10:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:09 crc kubenswrapper[4813]: I1125 10:33:09.132155 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:09 crc kubenswrapper[4813]: I1125 10:33:09.132195 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:09 crc kubenswrapper[4813]: I1125 10:33:09.132208 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:09 crc kubenswrapper[4813]: I1125 10:33:09.132224 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:09 crc kubenswrapper[4813]: I1125 10:33:09.132236 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:09Z","lastTransitionTime":"2025-11-25T10:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:09 crc kubenswrapper[4813]: I1125 10:33:09.235066 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:09 crc kubenswrapper[4813]: I1125 10:33:09.235134 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:09 crc kubenswrapper[4813]: I1125 10:33:09.235146 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:09 crc kubenswrapper[4813]: I1125 10:33:09.235164 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:09 crc kubenswrapper[4813]: I1125 10:33:09.235177 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:09Z","lastTransitionTime":"2025-11-25T10:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:09 crc kubenswrapper[4813]: I1125 10:33:09.337845 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:09 crc kubenswrapper[4813]: I1125 10:33:09.337877 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:09 crc kubenswrapper[4813]: I1125 10:33:09.337888 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:09 crc kubenswrapper[4813]: I1125 10:33:09.337904 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:09 crc kubenswrapper[4813]: I1125 10:33:09.337914 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:09Z","lastTransitionTime":"2025-11-25T10:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:09 crc kubenswrapper[4813]: I1125 10:33:09.440136 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:09 crc kubenswrapper[4813]: I1125 10:33:09.440181 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:09 crc kubenswrapper[4813]: I1125 10:33:09.440193 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:09 crc kubenswrapper[4813]: I1125 10:33:09.440210 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:09 crc kubenswrapper[4813]: I1125 10:33:09.440224 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:09Z","lastTransitionTime":"2025-11-25T10:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:09 crc kubenswrapper[4813]: I1125 10:33:09.543013 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:09 crc kubenswrapper[4813]: I1125 10:33:09.543080 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:09 crc kubenswrapper[4813]: I1125 10:33:09.543090 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:09 crc kubenswrapper[4813]: I1125 10:33:09.543110 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:09 crc kubenswrapper[4813]: I1125 10:33:09.543123 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:09Z","lastTransitionTime":"2025-11-25T10:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:09 crc kubenswrapper[4813]: I1125 10:33:09.621514 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 10:33:09 crc kubenswrapper[4813]: I1125 10:33:09.621605 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w28xl" Nov 25 10:33:09 crc kubenswrapper[4813]: E1125 10:33:09.621714 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 10:33:09 crc kubenswrapper[4813]: E1125 10:33:09.621805 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-w28xl" podUID="74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2" Nov 25 10:33:09 crc kubenswrapper[4813]: I1125 10:33:09.645525 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:09 crc kubenswrapper[4813]: I1125 10:33:09.645569 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:09 crc kubenswrapper[4813]: I1125 10:33:09.645581 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:09 crc kubenswrapper[4813]: I1125 10:33:09.645597 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:09 crc kubenswrapper[4813]: I1125 10:33:09.645608 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:09Z","lastTransitionTime":"2025-11-25T10:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:09 crc kubenswrapper[4813]: I1125 10:33:09.748653 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:09 crc kubenswrapper[4813]: I1125 10:33:09.748715 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:09 crc kubenswrapper[4813]: I1125 10:33:09.748728 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:09 crc kubenswrapper[4813]: I1125 10:33:09.748742 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:09 crc kubenswrapper[4813]: I1125 10:33:09.748753 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:09Z","lastTransitionTime":"2025-11-25T10:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:09 crc kubenswrapper[4813]: I1125 10:33:09.851631 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:09 crc kubenswrapper[4813]: I1125 10:33:09.851731 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:09 crc kubenswrapper[4813]: I1125 10:33:09.851766 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:09 crc kubenswrapper[4813]: I1125 10:33:09.851796 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:09 crc kubenswrapper[4813]: I1125 10:33:09.851818 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:09Z","lastTransitionTime":"2025-11-25T10:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:09 crc kubenswrapper[4813]: I1125 10:33:09.953851 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:09 crc kubenswrapper[4813]: I1125 10:33:09.953918 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:09 crc kubenswrapper[4813]: I1125 10:33:09.953930 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:09 crc kubenswrapper[4813]: I1125 10:33:09.953946 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:09 crc kubenswrapper[4813]: I1125 10:33:09.953957 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:09Z","lastTransitionTime":"2025-11-25T10:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:10 crc kubenswrapper[4813]: I1125 10:33:10.055646 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:10 crc kubenswrapper[4813]: I1125 10:33:10.055713 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:10 crc kubenswrapper[4813]: I1125 10:33:10.055726 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:10 crc kubenswrapper[4813]: I1125 10:33:10.055740 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:10 crc kubenswrapper[4813]: I1125 10:33:10.055750 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:10Z","lastTransitionTime":"2025-11-25T10:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:10 crc kubenswrapper[4813]: I1125 10:33:10.157801 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:10 crc kubenswrapper[4813]: I1125 10:33:10.157837 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:10 crc kubenswrapper[4813]: I1125 10:33:10.157845 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:10 crc kubenswrapper[4813]: I1125 10:33:10.157859 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:10 crc kubenswrapper[4813]: I1125 10:33:10.157869 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:10Z","lastTransitionTime":"2025-11-25T10:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:10 crc kubenswrapper[4813]: I1125 10:33:10.260579 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:10 crc kubenswrapper[4813]: I1125 10:33:10.260925 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:10 crc kubenswrapper[4813]: I1125 10:33:10.261040 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:10 crc kubenswrapper[4813]: I1125 10:33:10.261142 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:10 crc kubenswrapper[4813]: I1125 10:33:10.261237 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:10Z","lastTransitionTime":"2025-11-25T10:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:10 crc kubenswrapper[4813]: I1125 10:33:10.364147 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:10 crc kubenswrapper[4813]: I1125 10:33:10.364189 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:10 crc kubenswrapper[4813]: I1125 10:33:10.364197 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:10 crc kubenswrapper[4813]: I1125 10:33:10.364211 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:10 crc kubenswrapper[4813]: I1125 10:33:10.364220 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:10Z","lastTransitionTime":"2025-11-25T10:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:10 crc kubenswrapper[4813]: I1125 10:33:10.467074 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:10 crc kubenswrapper[4813]: I1125 10:33:10.467110 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:10 crc kubenswrapper[4813]: I1125 10:33:10.467128 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:10 crc kubenswrapper[4813]: I1125 10:33:10.467149 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:10 crc kubenswrapper[4813]: I1125 10:33:10.467160 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:10Z","lastTransitionTime":"2025-11-25T10:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:10 crc kubenswrapper[4813]: I1125 10:33:10.569527 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:10 crc kubenswrapper[4813]: I1125 10:33:10.569589 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:10 crc kubenswrapper[4813]: I1125 10:33:10.569602 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:10 crc kubenswrapper[4813]: I1125 10:33:10.569617 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:10 crc kubenswrapper[4813]: I1125 10:33:10.569630 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:10Z","lastTransitionTime":"2025-11-25T10:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:10 crc kubenswrapper[4813]: I1125 10:33:10.620391 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 10:33:10 crc kubenswrapper[4813]: I1125 10:33:10.620439 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 10:33:10 crc kubenswrapper[4813]: E1125 10:33:10.620507 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 10:33:10 crc kubenswrapper[4813]: E1125 10:33:10.620643 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 10:33:10 crc kubenswrapper[4813]: I1125 10:33:10.672993 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:10 crc kubenswrapper[4813]: I1125 10:33:10.673038 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:10 crc kubenswrapper[4813]: I1125 10:33:10.673051 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:10 crc kubenswrapper[4813]: I1125 10:33:10.673069 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:10 crc kubenswrapper[4813]: I1125 10:33:10.673081 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:10Z","lastTransitionTime":"2025-11-25T10:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:10 crc kubenswrapper[4813]: I1125 10:33:10.775777 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:10 crc kubenswrapper[4813]: I1125 10:33:10.775849 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:10 crc kubenswrapper[4813]: I1125 10:33:10.775867 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:10 crc kubenswrapper[4813]: I1125 10:33:10.775914 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:10 crc kubenswrapper[4813]: I1125 10:33:10.775936 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:10Z","lastTransitionTime":"2025-11-25T10:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:10 crc kubenswrapper[4813]: I1125 10:33:10.878783 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:10 crc kubenswrapper[4813]: I1125 10:33:10.879124 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:10 crc kubenswrapper[4813]: I1125 10:33:10.879231 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:10 crc kubenswrapper[4813]: I1125 10:33:10.879318 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:10 crc kubenswrapper[4813]: I1125 10:33:10.879386 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:10Z","lastTransitionTime":"2025-11-25T10:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:10 crc kubenswrapper[4813]: I1125 10:33:10.982331 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:10 crc kubenswrapper[4813]: I1125 10:33:10.982604 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:10 crc kubenswrapper[4813]: I1125 10:33:10.982791 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:10 crc kubenswrapper[4813]: I1125 10:33:10.982951 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:10 crc kubenswrapper[4813]: I1125 10:33:10.983024 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:10Z","lastTransitionTime":"2025-11-25T10:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:11 crc kubenswrapper[4813]: I1125 10:33:11.085458 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:11 crc kubenswrapper[4813]: I1125 10:33:11.085766 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:11 crc kubenswrapper[4813]: I1125 10:33:11.085878 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:11 crc kubenswrapper[4813]: I1125 10:33:11.085958 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:11 crc kubenswrapper[4813]: I1125 10:33:11.086024 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:11Z","lastTransitionTime":"2025-11-25T10:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:11 crc kubenswrapper[4813]: I1125 10:33:11.189562 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:11 crc kubenswrapper[4813]: I1125 10:33:11.189614 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:11 crc kubenswrapper[4813]: I1125 10:33:11.189624 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:11 crc kubenswrapper[4813]: I1125 10:33:11.189643 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:11 crc kubenswrapper[4813]: I1125 10:33:11.189657 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:11Z","lastTransitionTime":"2025-11-25T10:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:11 crc kubenswrapper[4813]: I1125 10:33:11.293068 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:11 crc kubenswrapper[4813]: I1125 10:33:11.293114 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:11 crc kubenswrapper[4813]: I1125 10:33:11.293132 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:11 crc kubenswrapper[4813]: I1125 10:33:11.293151 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:11 crc kubenswrapper[4813]: I1125 10:33:11.293166 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:11Z","lastTransitionTime":"2025-11-25T10:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:11 crc kubenswrapper[4813]: I1125 10:33:11.390439 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:11 crc kubenswrapper[4813]: I1125 10:33:11.390526 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:11 crc kubenswrapper[4813]: I1125 10:33:11.390540 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:11 crc kubenswrapper[4813]: I1125 10:33:11.390563 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:11 crc kubenswrapper[4813]: I1125 10:33:11.390581 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:11Z","lastTransitionTime":"2025-11-25T10:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:11 crc kubenswrapper[4813]: E1125 10:33:11.415530 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:33:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:33:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:33:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:33:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:33:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:33:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:33:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:33:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b8f6803-8c92-44d2-bc35-374b0f00608e\\\",\\\"systemUUID\\\":\\\"85f815b0-dc24-49ca-a7fb-6bc8e198cbb1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:11Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:11 crc kubenswrapper[4813]: I1125 10:33:11.420949 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:11 crc kubenswrapper[4813]: I1125 10:33:11.421012 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 10:33:11 crc kubenswrapper[4813]: I1125 10:33:11.421039 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:11 crc kubenswrapper[4813]: I1125 10:33:11.421069 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:11 crc kubenswrapper[4813]: I1125 10:33:11.421092 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:11Z","lastTransitionTime":"2025-11-25T10:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:11 crc kubenswrapper[4813]: E1125 10:33:11.434491 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:33:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:33:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:33:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:33:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:33:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:33:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:33:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:33:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b8f6803-8c92-44d2-bc35-374b0f00608e\\\",\\\"systemUUID\\\":\\\"85f815b0-dc24-49ca-a7fb-6bc8e198cbb1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:11Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:11 crc kubenswrapper[4813]: I1125 10:33:11.438417 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:11 crc kubenswrapper[4813]: I1125 10:33:11.438445 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 10:33:11 crc kubenswrapper[4813]: I1125 10:33:11.438454 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:11 crc kubenswrapper[4813]: I1125 10:33:11.438468 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:11 crc kubenswrapper[4813]: I1125 10:33:11.438477 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:11Z","lastTransitionTime":"2025-11-25T10:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:11 crc kubenswrapper[4813]: E1125 10:33:11.452328 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:33:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:33:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:33:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:33:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:33:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:33:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:33:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:33:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b8f6803-8c92-44d2-bc35-374b0f00608e\\\",\\\"systemUUID\\\":\\\"85f815b0-dc24-49ca-a7fb-6bc8e198cbb1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:11Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:11 crc kubenswrapper[4813]: I1125 10:33:11.457158 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:11 crc kubenswrapper[4813]: I1125 10:33:11.457218 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 10:33:11 crc kubenswrapper[4813]: I1125 10:33:11.457236 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:11 crc kubenswrapper[4813]: I1125 10:33:11.457262 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:11 crc kubenswrapper[4813]: I1125 10:33:11.457279 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:11Z","lastTransitionTime":"2025-11-25T10:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:11 crc kubenswrapper[4813]: E1125 10:33:11.471729 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:33:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:33:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:33:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:33:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:33:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:33:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:33:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:33:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b8f6803-8c92-44d2-bc35-374b0f00608e\\\",\\\"systemUUID\\\":\\\"85f815b0-dc24-49ca-a7fb-6bc8e198cbb1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:11Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:11 crc kubenswrapper[4813]: I1125 10:33:11.476796 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:11 crc kubenswrapper[4813]: I1125 10:33:11.476834 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 10:33:11 crc kubenswrapper[4813]: I1125 10:33:11.476845 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:11 crc kubenswrapper[4813]: I1125 10:33:11.476859 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:11 crc kubenswrapper[4813]: I1125 10:33:11.476868 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:11Z","lastTransitionTime":"2025-11-25T10:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:11 crc kubenswrapper[4813]: E1125 10:33:11.489146 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:33:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:33:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:33:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:33:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:33:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:33:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:33:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:33:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b8f6803-8c92-44d2-bc35-374b0f00608e\\\",\\\"systemUUID\\\":\\\"85f815b0-dc24-49ca-a7fb-6bc8e198cbb1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:11Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:11 crc kubenswrapper[4813]: E1125 10:33:11.489325 4813 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 10:33:11 crc kubenswrapper[4813]: I1125 10:33:11.491839 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 25 10:33:11 crc kubenswrapper[4813]: I1125 10:33:11.491882 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:11 crc kubenswrapper[4813]: I1125 10:33:11.491896 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:11 crc kubenswrapper[4813]: I1125 10:33:11.491911 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:11 crc kubenswrapper[4813]: I1125 10:33:11.491921 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:11Z","lastTransitionTime":"2025-11-25T10:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:11 crc kubenswrapper[4813]: I1125 10:33:11.594735 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:11 crc kubenswrapper[4813]: I1125 10:33:11.594791 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:11 crc kubenswrapper[4813]: I1125 10:33:11.594803 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:11 crc kubenswrapper[4813]: I1125 10:33:11.594825 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:11 crc kubenswrapper[4813]: I1125 10:33:11.594840 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:11Z","lastTransitionTime":"2025-11-25T10:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:11 crc kubenswrapper[4813]: I1125 10:33:11.621271 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 10:33:11 crc kubenswrapper[4813]: I1125 10:33:11.621321 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w28xl" Nov 25 10:33:11 crc kubenswrapper[4813]: E1125 10:33:11.621463 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 10:33:11 crc kubenswrapper[4813]: E1125 10:33:11.621626 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-w28xl" podUID="74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2" Nov 25 10:33:11 crc kubenswrapper[4813]: I1125 10:33:11.698203 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:11 crc kubenswrapper[4813]: I1125 10:33:11.698250 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:11 crc kubenswrapper[4813]: I1125 10:33:11.698262 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:11 crc kubenswrapper[4813]: I1125 10:33:11.698279 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:11 crc kubenswrapper[4813]: I1125 10:33:11.698292 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:11Z","lastTransitionTime":"2025-11-25T10:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:11 crc kubenswrapper[4813]: I1125 10:33:11.801226 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:11 crc kubenswrapper[4813]: I1125 10:33:11.801270 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:11 crc kubenswrapper[4813]: I1125 10:33:11.801282 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:11 crc kubenswrapper[4813]: I1125 10:33:11.801299 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:11 crc kubenswrapper[4813]: I1125 10:33:11.801311 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:11Z","lastTransitionTime":"2025-11-25T10:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:11 crc kubenswrapper[4813]: I1125 10:33:11.904774 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:11 crc kubenswrapper[4813]: I1125 10:33:11.904825 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:11 crc kubenswrapper[4813]: I1125 10:33:11.904839 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:11 crc kubenswrapper[4813]: I1125 10:33:11.904859 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:11 crc kubenswrapper[4813]: I1125 10:33:11.904874 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:11Z","lastTransitionTime":"2025-11-25T10:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:12 crc kubenswrapper[4813]: I1125 10:33:12.007750 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:12 crc kubenswrapper[4813]: I1125 10:33:12.007792 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:12 crc kubenswrapper[4813]: I1125 10:33:12.007802 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:12 crc kubenswrapper[4813]: I1125 10:33:12.007819 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:12 crc kubenswrapper[4813]: I1125 10:33:12.007829 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:12Z","lastTransitionTime":"2025-11-25T10:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:12 crc kubenswrapper[4813]: I1125 10:33:12.110141 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:12 crc kubenswrapper[4813]: I1125 10:33:12.110189 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:12 crc kubenswrapper[4813]: I1125 10:33:12.110205 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:12 crc kubenswrapper[4813]: I1125 10:33:12.110224 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:12 crc kubenswrapper[4813]: I1125 10:33:12.110237 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:12Z","lastTransitionTime":"2025-11-25T10:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:12 crc kubenswrapper[4813]: I1125 10:33:12.212688 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:12 crc kubenswrapper[4813]: I1125 10:33:12.212906 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:12 crc kubenswrapper[4813]: I1125 10:33:12.212987 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:12 crc kubenswrapper[4813]: I1125 10:33:12.213099 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:12 crc kubenswrapper[4813]: I1125 10:33:12.213180 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:12Z","lastTransitionTime":"2025-11-25T10:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:12 crc kubenswrapper[4813]: I1125 10:33:12.316362 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:12 crc kubenswrapper[4813]: I1125 10:33:12.316406 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:12 crc kubenswrapper[4813]: I1125 10:33:12.316416 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:12 crc kubenswrapper[4813]: I1125 10:33:12.316429 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:12 crc kubenswrapper[4813]: I1125 10:33:12.316438 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:12Z","lastTransitionTime":"2025-11-25T10:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:12 crc kubenswrapper[4813]: I1125 10:33:12.418499 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:12 crc kubenswrapper[4813]: I1125 10:33:12.418536 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:12 crc kubenswrapper[4813]: I1125 10:33:12.418545 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:12 crc kubenswrapper[4813]: I1125 10:33:12.418557 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:12 crc kubenswrapper[4813]: I1125 10:33:12.418567 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:12Z","lastTransitionTime":"2025-11-25T10:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:12 crc kubenswrapper[4813]: I1125 10:33:12.520823 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:12 crc kubenswrapper[4813]: I1125 10:33:12.520923 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:12 crc kubenswrapper[4813]: I1125 10:33:12.520941 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:12 crc kubenswrapper[4813]: I1125 10:33:12.520965 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:12 crc kubenswrapper[4813]: I1125 10:33:12.520982 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:12Z","lastTransitionTime":"2025-11-25T10:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:12 crc kubenswrapper[4813]: I1125 10:33:12.621456 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 10:33:12 crc kubenswrapper[4813]: I1125 10:33:12.621487 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 10:33:12 crc kubenswrapper[4813]: E1125 10:33:12.621755 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 10:33:12 crc kubenswrapper[4813]: E1125 10:33:12.622290 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 10:33:12 crc kubenswrapper[4813]: I1125 10:33:12.623586 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:12 crc kubenswrapper[4813]: I1125 10:33:12.623638 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:12 crc kubenswrapper[4813]: I1125 10:33:12.623654 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:12 crc kubenswrapper[4813]: I1125 10:33:12.623692 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:12 crc kubenswrapper[4813]: I1125 10:33:12.623704 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:12Z","lastTransitionTime":"2025-11-25T10:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:12 crc kubenswrapper[4813]: I1125 10:33:12.631657 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Nov 25 10:33:12 crc kubenswrapper[4813]: I1125 10:33:12.726722 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:12 crc kubenswrapper[4813]: I1125 10:33:12.726772 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:12 crc kubenswrapper[4813]: I1125 10:33:12.726782 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:12 crc kubenswrapper[4813]: I1125 10:33:12.726830 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:12 crc kubenswrapper[4813]: I1125 10:33:12.726841 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:12Z","lastTransitionTime":"2025-11-25T10:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:12 crc kubenswrapper[4813]: I1125 10:33:12.830388 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:12 crc kubenswrapper[4813]: I1125 10:33:12.830452 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:12 crc kubenswrapper[4813]: I1125 10:33:12.830467 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:12 crc kubenswrapper[4813]: I1125 10:33:12.830496 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:12 crc kubenswrapper[4813]: I1125 10:33:12.830512 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:12Z","lastTransitionTime":"2025-11-25T10:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:12 crc kubenswrapper[4813]: I1125 10:33:12.933251 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:12 crc kubenswrapper[4813]: I1125 10:33:12.933319 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:12 crc kubenswrapper[4813]: I1125 10:33:12.933341 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:12 crc kubenswrapper[4813]: I1125 10:33:12.933373 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:12 crc kubenswrapper[4813]: I1125 10:33:12.933396 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:12Z","lastTransitionTime":"2025-11-25T10:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:13 crc kubenswrapper[4813]: I1125 10:33:13.036236 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:13 crc kubenswrapper[4813]: I1125 10:33:13.036302 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:13 crc kubenswrapper[4813]: I1125 10:33:13.036325 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:13 crc kubenswrapper[4813]: I1125 10:33:13.036354 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:13 crc kubenswrapper[4813]: I1125 10:33:13.036376 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:13Z","lastTransitionTime":"2025-11-25T10:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:13 crc kubenswrapper[4813]: I1125 10:33:13.139111 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:13 crc kubenswrapper[4813]: I1125 10:33:13.139184 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:13 crc kubenswrapper[4813]: I1125 10:33:13.139207 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:13 crc kubenswrapper[4813]: I1125 10:33:13.139236 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:13 crc kubenswrapper[4813]: I1125 10:33:13.139259 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:13Z","lastTransitionTime":"2025-11-25T10:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:13 crc kubenswrapper[4813]: I1125 10:33:13.242370 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:13 crc kubenswrapper[4813]: I1125 10:33:13.242437 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:13 crc kubenswrapper[4813]: I1125 10:33:13.242455 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:13 crc kubenswrapper[4813]: I1125 10:33:13.242478 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:13 crc kubenswrapper[4813]: I1125 10:33:13.242495 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:13Z","lastTransitionTime":"2025-11-25T10:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:13 crc kubenswrapper[4813]: I1125 10:33:13.345823 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:13 crc kubenswrapper[4813]: I1125 10:33:13.345899 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:13 crc kubenswrapper[4813]: I1125 10:33:13.345910 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:13 crc kubenswrapper[4813]: I1125 10:33:13.345933 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:13 crc kubenswrapper[4813]: I1125 10:33:13.345943 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:13Z","lastTransitionTime":"2025-11-25T10:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:13 crc kubenswrapper[4813]: I1125 10:33:13.449154 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:13 crc kubenswrapper[4813]: I1125 10:33:13.449232 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:13 crc kubenswrapper[4813]: I1125 10:33:13.449254 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:13 crc kubenswrapper[4813]: I1125 10:33:13.449284 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:13 crc kubenswrapper[4813]: I1125 10:33:13.449307 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:13Z","lastTransitionTime":"2025-11-25T10:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:13 crc kubenswrapper[4813]: I1125 10:33:13.551526 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:13 crc kubenswrapper[4813]: I1125 10:33:13.551566 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:13 crc kubenswrapper[4813]: I1125 10:33:13.551575 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:13 crc kubenswrapper[4813]: I1125 10:33:13.551589 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:13 crc kubenswrapper[4813]: I1125 10:33:13.551601 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:13Z","lastTransitionTime":"2025-11-25T10:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:13 crc kubenswrapper[4813]: I1125 10:33:13.621300 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 10:33:13 crc kubenswrapper[4813]: I1125 10:33:13.621463 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w28xl" Nov 25 10:33:13 crc kubenswrapper[4813]: E1125 10:33:13.621653 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 10:33:13 crc kubenswrapper[4813]: E1125 10:33:13.621881 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w28xl" podUID="74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2" Nov 25 10:33:13 crc kubenswrapper[4813]: I1125 10:33:13.634538 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"061a2a52-878f-4543-8408-3a7b838f8881\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761ff3f6b4afa8edd4892d9fe727e977fb9700a8c7ab1c149c12bfa6431951c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf09669b247e0daa0787d296aa833570e1a542082a7a698bb499dc34f16fa4be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e593ff2a6412d8dfd3cd96e456f4fe9e2f8b04302d5b9036b828a3cf480b573\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faa
f92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11e2aa9eaa941ade1982256194422becbe3f375508cd507f603a822b10e03134\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:13Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:13 crc kubenswrapper[4813]: I1125 10:33:13.647391 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65e26a5c-3d20-48c4-b0aa-e7e7c439a18f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71ac844ac0be61d9aa56028670f20db4c9c600feffd4355d9636253b7d50e18d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://312228ac2cd8a213ffcac9564ff0abe8b6f330abca932992170d2f6ccea5edb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://312228ac2cd8a213ffcac9564ff0abe8b6f330abca932992170d2f6ccea5edb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:31:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:13Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:13 crc kubenswrapper[4813]: I1125 10:33:13.653641 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:13 crc kubenswrapper[4813]: I1125 10:33:13.653725 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 10:33:13 crc kubenswrapper[4813]: I1125 10:33:13.653743 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:13 crc kubenswrapper[4813]: I1125 10:33:13.653763 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:13 crc kubenswrapper[4813]: I1125 10:33:13.653778 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:13Z","lastTransitionTime":"2025-11-25T10:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:13 crc kubenswrapper[4813]: I1125 10:33:13.662721 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:13Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:13 crc kubenswrapper[4813]: I1125 10:33:13.675020 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:13Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:13 crc kubenswrapper[4813]: I1125 10:33:13.688174 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:13Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:13 crc kubenswrapper[4813]: I1125 10:33:13.701524 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4s9w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2ac9045-f02f-4149-afa5-61da1452d547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbdce0d7869276078c48cf3c335c37ec3c8f324e76db30e312485508977ed8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://792d5ec80cac3667bf3ad534b473ae86eca391f49782cfc0938d789eefd24a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://792d5ec80cac3667bf3ad534b473ae86eca391f49782cfc0938d789eefd24a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2afd11e5128cad91161f49b1e5d6ac378dbd319773996dbe702bf678a45a4a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2afd11e5128cad91161f49b1e5d6ac378dbd319773996dbe702bf678a45a4a91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00af788f1e52f5e8adb3f20e61f5fbcfd1090e97a1f24d4ebe926dad23155ae5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00af788f1e52f5e8adb3f20e61f5fbcfd1090e97a1f24d4ebe926dad23155ae5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://156bff53f3008351c3f76a0cc5e9c3eeb4f19a7201392d095bc62012791d9fa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://156bff53f3008351c3f76a0cc5e9c3eeb4f19a7201392d095bc62012791d9fa5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a98899b475454bf9249b6437439cb15a56278a71678cd2c7a430b4c14ef4022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a98899b475454bf9249b6437439cb15a56278a71678cd2c7a430b4c14ef4022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://345ac26e481961ce51e21644b04d31cd5a82c981e9a2355ddd863036cabb4a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://345ac26e481961ce51e21644b04d31cd5a82c981e9a2355ddd863036cabb4a4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4s9w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:13Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:13 crc kubenswrapper[4813]: I1125 10:33:13.725522 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8460ec76-ba89-4f8f-9055-d7274ab52d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0292e263e2315d5f0352fb15d9e84e89f103c0b8e3371db2a611b001c5a3fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ab3178c217051fe9026c77a963c194bed57ec0fb9521678f41c7c16235ca789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee35613ff013fdd9f9ba4aa81006a99cd328ab65010b9b337815829bfcc88937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1581fa41d3a426258f7c464d5e0f2ad431917ccec0616d26bb8b0affa320c90e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c4c4032f6080041e0b54686cb2c9981d2578e7a2bd02bcc1cf008c8fa3bfb6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7324d51c21107fadbd2f170e16f3cc20fc473ca9b7b1bbe0fc5e64378bd6ab7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47a786668d4e29437970008a1e91d74d92c964b
a10a6eba1f8d405d05a26e7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c47a786668d4e29437970008a1e91d74d92c964ba10a6eba1f8d405d05a26e7b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T10:33:04Z\\\",\\\"message\\\":\\\"-manager/kube-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"ba175bbe-5cc4-47e6-a32d-57693e1320bd\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-controller-manager/kube-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-controller-manager/kube-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-controller-manager/kube-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.36\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF1125 10:33:04.482189 6884 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T10:33:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8s5k7_openshift-ovn-kubernetes(8460ec76-ba89-4f8f-9055-d7274ab52d11)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32898e756d7697bcb5b6ae6780b7b752be67b44b9ce8c2f2459477c7f0b0a28d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6554bcb1ce7e97de39f99556fc4e3db63a583ea45bd87706a3c7737a8bde4f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6554bcb1ce7e97de39f99556fc4e3db63a583ea45bd87706a3c7737a8bde4f5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8s5k7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:13Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:13 crc kubenswrapper[4813]: I1125 10:33:13.735599 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-w28xl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4dw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4dw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-w28xl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:13Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:13 crc kubenswrapper[4813]: I1125 10:33:13.749117 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03303956e8d88df49c9c142a7074fa39272a78ea67e868b302d3a663d7f7178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:13Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:13 crc kubenswrapper[4813]: I1125 10:33:13.756322 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:13 crc kubenswrapper[4813]: I1125 10:33:13.756373 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:13 crc kubenswrapper[4813]: I1125 10:33:13.756386 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:13 crc kubenswrapper[4813]: I1125 10:33:13.756402 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:13 crc kubenswrapper[4813]: I1125 10:33:13.756415 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:13Z","lastTransitionTime":"2025-11-25T10:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:13 crc kubenswrapper[4813]: I1125 10:33:13.765019 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86379c39-b839-4552-949c-35431188a3a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf4d6feac8fd516ce2d5e2ec13519c2bbd2d152cffe7c434fe2c4b478e8c9a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80f2017cddd8c12997b1818074df5aa37a902dca43c4b60dda58080e1887f8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225dc69c294a0063eda858d71902e848fb59d4595c25bfeecdf8dfb60fdcd6f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cbb3888ff07d07784e188a0b7b49e0f5b421cfaeb61924a0a46094fb3795b32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e393f04b541e0fc8c686b42396605529aa65fdaaf6602dd7c64a322a5071d643\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T10:31:57Z\\\",\\\"message\\\":\\\"W1125 10:31:46.900040 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1125 10:31:46.900557 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764066706 cert, and key in /tmp/serving-cert-1749499007/serving-signer.crt, /tmp/serving-cert-1749499007/serving-signer.key\\\\nI1125 10:31:47.317086 1 observer_polling.go:159] Starting file observer\\\\nW1125 10:31:47.321027 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 10:31:47.321219 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 10:31:47.325062 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1749499007/tls.crt::/tmp/serving-cert-1749499007/tls.key\\\\\\\"\\\\nF1125 10:31:57.761534 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46e1b456988c700012c86fac792b65d2e7c9a049057d5a17efbf600418191910\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3af1cb4a9a556e116874a9b7f7cb75a99db83b991b65280b6b2a96233ca0a85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3af1cb4a9a556e116874a9b7f7cb75a99db83b991b65280b6b2a96233ca0a85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:31:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:13Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:13 crc kubenswrapper[4813]: I1125 10:33:13.777856 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mmh87" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bcb41f8-67f5-4a87-8b49-07da054e0c81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fbf69eb2f0afb160e40675e9a17e8a9798a3f02de6a2f3aae7a30ef989e5479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xtc7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mmh87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:13Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:13 crc kubenswrapper[4813]: I1125 10:33:13.793614 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ece7e9c-d49a-4348-98ec-bd6ab589f750\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b85e2f2d2a870b205f19402a20540fa67104d12d2fcd412ada24c78b0602f2ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j55j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c16599a2b18976267f55176085b4b11e3e253e308707081d06d28d64f4dbb627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j55j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-knhz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:13Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:13 crc kubenswrapper[4813]: I1125 10:33:13.807284 4813 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7391b3f2-dce9-4286-b622-7e7202a042c0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b823e81d1130cdb4373ba0b3d00a5f2d0717e34dcf36d2172550263b44e953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fa62598abd071ec69894326a022e35c2b383a5d5a1b893b0ecc1e30b8b775ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21dd198f1963287a0866dc0aa9d9854472f833cac0d0146a142a370e236b09f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://c9ab19e784bbd45e4f4c23288211674ac0d0affbe2736d338967e9237d672760\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9ab19e784bbd45e4f4c23288211674ac0d0affbe2736d338967e9237d672760\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:31:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:13Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:13 crc kubenswrapper[4813]: I1125 10:33:13.822907 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00ebb057ca6152197fa76fc78787533ab8ddaa1e1a096c624e3efc5fcf091332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616fae5157b8d51f903f870d19e7ed40447c3eb954b0e1bd0b3323c27deb59f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209948
2919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:13Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:13 crc kubenswrapper[4813]: I1125 10:33:13.833387 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adac7b8b6297f077adc2d0e402547d19845a4b66a1279e143ba89f014ccdbf15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:13Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:13 crc kubenswrapper[4813]: I1125 10:33:13.846699 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rlpbx" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98439068-3c89-4c1b-8bb8-8aa848ef0cd3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e45d1cfd847d1fbd71b9790ea8725a76ffc6117b372d227e921dad0143f7b30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73be3b0cabd20c94bd5c69211038398effe8adbb93eda17dbb136f17fa5ba62e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T10:32:55Z\\\",\\\"message\\\":\\\"2025-11-25T10:32:09+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_39ca8145-fa4d-4ac0-ba01-62afbe2deb27\\\\n2025-11-25T10:32:09+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_39ca8145-fa4d-4ac0-ba01-62afbe2deb27 to /host/opt/cni/bin/\\\\n2025-11-25T10:32:10Z [verbose] multus-daemon started\\\\n2025-11-25T10:32:10Z [verbose] Readiness Indicator file check\\\\n2025-11-25T10:32:55Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdxm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rlpbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:13Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:13 crc kubenswrapper[4813]: I1125 10:33:13.858386 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:13 crc kubenswrapper[4813]: I1125 10:33:13.858419 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:13 crc kubenswrapper[4813]: I1125 10:33:13.858430 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:13 crc kubenswrapper[4813]: I1125 10:33:13.858449 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:13 crc kubenswrapper[4813]: I1125 10:33:13.858462 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:13Z","lastTransitionTime":"2025-11-25T10:33:13Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:13 crc kubenswrapper[4813]: I1125 10:33:13.860131 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qltmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7637b907-3ae7-4b15-a4b9-a0c2217384a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://713975d4e8de4e14484cbd711f5279ddce3acad00571bf052b0ed728bd1a0ccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qvsb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qltmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:13Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:13 crc kubenswrapper[4813]: I1125 10:33:13.871077 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sbzfj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eccc6bcf-65c9-4741-a1d7-e5545661d3d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf35ea2947d355207c657bf7ef54d855cead727db293543efaa653bb03718f6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8s86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75f58510a2e937f933fadfec014e5ddff8e6cea4df17e8ade67f4c7af9be7104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8s86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sbzfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:13Z is after 2025-08-24T17:21:41Z" Nov 25 
10:33:13 crc kubenswrapper[4813]: I1125 10:33:13.960309 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:13 crc kubenswrapper[4813]: I1125 10:33:13.960362 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:13 crc kubenswrapper[4813]: I1125 10:33:13.960378 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:13 crc kubenswrapper[4813]: I1125 10:33:13.960399 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:13 crc kubenswrapper[4813]: I1125 10:33:13.960414 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:13Z","lastTransitionTime":"2025-11-25T10:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:14 crc kubenswrapper[4813]: I1125 10:33:14.062909 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:14 crc kubenswrapper[4813]: I1125 10:33:14.062984 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:14 crc kubenswrapper[4813]: I1125 10:33:14.063007 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:14 crc kubenswrapper[4813]: I1125 10:33:14.063029 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:14 crc kubenswrapper[4813]: I1125 10:33:14.063049 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:14Z","lastTransitionTime":"2025-11-25T10:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:14 crc kubenswrapper[4813]: I1125 10:33:14.165971 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:14 crc kubenswrapper[4813]: I1125 10:33:14.166021 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:14 crc kubenswrapper[4813]: I1125 10:33:14.166033 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:14 crc kubenswrapper[4813]: I1125 10:33:14.166052 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:14 crc kubenswrapper[4813]: I1125 10:33:14.166066 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:14Z","lastTransitionTime":"2025-11-25T10:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:14 crc kubenswrapper[4813]: I1125 10:33:14.292920 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:14 crc kubenswrapper[4813]: I1125 10:33:14.292980 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:14 crc kubenswrapper[4813]: I1125 10:33:14.292997 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:14 crc kubenswrapper[4813]: I1125 10:33:14.293019 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:14 crc kubenswrapper[4813]: I1125 10:33:14.293036 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:14Z","lastTransitionTime":"2025-11-25T10:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:14 crc kubenswrapper[4813]: I1125 10:33:14.397629 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:14 crc kubenswrapper[4813]: I1125 10:33:14.397781 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:14 crc kubenswrapper[4813]: I1125 10:33:14.397812 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:14 crc kubenswrapper[4813]: I1125 10:33:14.397842 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:14 crc kubenswrapper[4813]: I1125 10:33:14.397861 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:14Z","lastTransitionTime":"2025-11-25T10:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:14 crc kubenswrapper[4813]: I1125 10:33:14.500641 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:14 crc kubenswrapper[4813]: I1125 10:33:14.500768 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:14 crc kubenswrapper[4813]: I1125 10:33:14.500788 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:14 crc kubenswrapper[4813]: I1125 10:33:14.500817 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:14 crc kubenswrapper[4813]: I1125 10:33:14.500837 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:14Z","lastTransitionTime":"2025-11-25T10:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:14 crc kubenswrapper[4813]: I1125 10:33:14.603482 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:14 crc kubenswrapper[4813]: I1125 10:33:14.603520 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:14 crc kubenswrapper[4813]: I1125 10:33:14.603530 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:14 crc kubenswrapper[4813]: I1125 10:33:14.603546 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:14 crc kubenswrapper[4813]: I1125 10:33:14.603557 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:14Z","lastTransitionTime":"2025-11-25T10:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:14 crc kubenswrapper[4813]: I1125 10:33:14.621033 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 10:33:14 crc kubenswrapper[4813]: I1125 10:33:14.621050 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 10:33:14 crc kubenswrapper[4813]: E1125 10:33:14.621155 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 10:33:14 crc kubenswrapper[4813]: E1125 10:33:14.621311 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
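The status-patch failures earlier in this excerpt all fail the same way: the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 presents a serving certificate whose NotAfter (2025-08-24T17:21:41Z) is behind the node clock (2025-11-25T10:33:13Z). The wording "certificate has expired or is not yet valid" is the validity check in Go's crypto/x509 package. A minimal sketch of that check, assuming a locally readable PEM copy of the certificate; the file name used here is hypothetical, not taken from the log:

package main

import (
    "crypto/x509"
    "encoding/pem"
    "fmt"
    "os"
    "time"
)

func main() {
    // Hypothetical local copy of the webhook serving certificate.
    raw, err := os.ReadFile("webhook-serving-cert.pem")
    if err != nil {
        fmt.Println("read certificate:", err)
        os.Exit(1)
    }
    block, _ := pem.Decode(raw)
    if block == nil {
        fmt.Println("no PEM block found in input")
        os.Exit(1)
    }
    cert, err := x509.ParseCertificate(block.Bytes)
    if err != nil {
        fmt.Println("parse certificate:", err)
        os.Exit(1)
    }
    now := time.Now().UTC()
    switch {
    case now.After(cert.NotAfter):
        // Same situation the kubelet reports above: current time is after NotAfter.
        fmt.Printf("certificate has expired: current time %s is after %s\n",
            now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
    case now.Before(cert.NotBefore):
        fmt.Printf("certificate is not yet valid: current time %s is before %s\n",
            now.Format(time.RFC3339), cert.NotBefore.Format(time.RFC3339))
    default:
        fmt.Printf("certificate is valid until %s\n", cert.NotAfter.Format(time.RFC3339))
    }
}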
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 10:33:14 crc kubenswrapper[4813]: I1125 10:33:14.706713 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:14 crc kubenswrapper[4813]: I1125 10:33:14.706758 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:14 crc kubenswrapper[4813]: I1125 10:33:14.706772 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:14 crc kubenswrapper[4813]: I1125 10:33:14.706800 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:14 crc kubenswrapper[4813]: I1125 10:33:14.706818 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:14Z","lastTransitionTime":"2025-11-25T10:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:14 crc kubenswrapper[4813]: I1125 10:33:14.810412 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:14 crc kubenswrapper[4813]: I1125 10:33:14.810479 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:14 crc kubenswrapper[4813]: I1125 10:33:14.810504 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:14 crc kubenswrapper[4813]: I1125 10:33:14.810531 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:14 crc kubenswrapper[4813]: I1125 10:33:14.810552 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:14Z","lastTransitionTime":"2025-11-25T10:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:14 crc kubenswrapper[4813]: I1125 10:33:14.913605 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:14 crc kubenswrapper[4813]: I1125 10:33:14.913650 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:14 crc kubenswrapper[4813]: I1125 10:33:14.913660 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:14 crc kubenswrapper[4813]: I1125 10:33:14.913674 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:14 crc kubenswrapper[4813]: I1125 10:33:14.913707 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:14Z","lastTransitionTime":"2025-11-25T10:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:15 crc kubenswrapper[4813]: I1125 10:33:15.016253 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:15 crc kubenswrapper[4813]: I1125 10:33:15.016313 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:15 crc kubenswrapper[4813]: I1125 10:33:15.016331 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:15 crc kubenswrapper[4813]: I1125 10:33:15.016354 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:15 crc kubenswrapper[4813]: I1125 10:33:15.016372 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:15Z","lastTransitionTime":"2025-11-25T10:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:15 crc kubenswrapper[4813]: I1125 10:33:15.119649 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:15 crc kubenswrapper[4813]: I1125 10:33:15.119752 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:15 crc kubenswrapper[4813]: I1125 10:33:15.119778 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:15 crc kubenswrapper[4813]: I1125 10:33:15.119807 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:15 crc kubenswrapper[4813]: I1125 10:33:15.119831 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:15Z","lastTransitionTime":"2025-11-25T10:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:15 crc kubenswrapper[4813]: I1125 10:33:15.223296 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:15 crc kubenswrapper[4813]: I1125 10:33:15.223348 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:15 crc kubenswrapper[4813]: I1125 10:33:15.223365 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:15 crc kubenswrapper[4813]: I1125 10:33:15.223387 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:15 crc kubenswrapper[4813]: I1125 10:33:15.223426 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:15Z","lastTransitionTime":"2025-11-25T10:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:15 crc kubenswrapper[4813]: I1125 10:33:15.326601 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:15 crc kubenswrapper[4813]: I1125 10:33:15.326670 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:15 crc kubenswrapper[4813]: I1125 10:33:15.326717 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:15 crc kubenswrapper[4813]: I1125 10:33:15.326742 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:15 crc kubenswrapper[4813]: I1125 10:33:15.326759 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:15Z","lastTransitionTime":"2025-11-25T10:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:15 crc kubenswrapper[4813]: I1125 10:33:15.429795 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:15 crc kubenswrapper[4813]: I1125 10:33:15.429835 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:15 crc kubenswrapper[4813]: I1125 10:33:15.429844 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:15 crc kubenswrapper[4813]: I1125 10:33:15.429856 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:15 crc kubenswrapper[4813]: I1125 10:33:15.429868 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:15Z","lastTransitionTime":"2025-11-25T10:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:15 crc kubenswrapper[4813]: I1125 10:33:15.533338 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:15 crc kubenswrapper[4813]: I1125 10:33:15.533401 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:15 crc kubenswrapper[4813]: I1125 10:33:15.533431 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:15 crc kubenswrapper[4813]: I1125 10:33:15.533460 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:15 crc kubenswrapper[4813]: I1125 10:33:15.533530 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:15Z","lastTransitionTime":"2025-11-25T10:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:15 crc kubenswrapper[4813]: I1125 10:33:15.621560 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 10:33:15 crc kubenswrapper[4813]: I1125 10:33:15.621606 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w28xl" Nov 25 10:33:15 crc kubenswrapper[4813]: E1125 10:33:15.621839 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 10:33:15 crc kubenswrapper[4813]: E1125 10:33:15.622018 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
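The repeated NodeNotReady entries above all carry the same condition: NetworkReady=false because no CNI configuration file exists in /etc/kubernetes/cni/net.d/. A rough, simplified sketch of that kind of readiness check (not the actual kubelet or CRI-O code) that scans the directory named in the log for .conf, .conflist, or .json network configurations:

package main

import (
    "fmt"
    "os"
    "path/filepath"
)

func main() {
    // Directory named in the kubelet messages above.
    confDir := "/etc/kubernetes/cni/net.d"
    entries, err := os.ReadDir(confDir)
    if err != nil {
        fmt.Println("cannot read CNI conf dir:", err)
        os.Exit(1)
    }
    var confs []string
    for _, e := range entries {
        if e.IsDir() {
            continue
        }
        switch filepath.Ext(e.Name()) {
        case ".conf", ".conflist", ".json":
            confs = append(confs, e.Name())
        }
    }
    if len(confs) == 0 {
        // Corresponds to the NetworkPluginNotReady condition in the log.
        fmt.Printf("no CNI configuration file in %s/, network plugin not ready\n", confDir)
        os.Exit(1)
    }
    fmt.Println("CNI configurations found:", confs)
}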
pod="openshift-multus/network-metrics-daemon-w28xl" podUID="74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2" Nov 25 10:33:15 crc kubenswrapper[4813]: I1125 10:33:15.636047 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:15 crc kubenswrapper[4813]: I1125 10:33:15.636119 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:15 crc kubenswrapper[4813]: I1125 10:33:15.636143 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:15 crc kubenswrapper[4813]: I1125 10:33:15.636166 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:15 crc kubenswrapper[4813]: I1125 10:33:15.636185 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:15Z","lastTransitionTime":"2025-11-25T10:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:15 crc kubenswrapper[4813]: I1125 10:33:15.738957 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:15 crc kubenswrapper[4813]: I1125 10:33:15.739000 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:15 crc kubenswrapper[4813]: I1125 10:33:15.739009 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:15 crc kubenswrapper[4813]: I1125 10:33:15.739023 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:15 crc kubenswrapper[4813]: I1125 10:33:15.739032 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:15Z","lastTransitionTime":"2025-11-25T10:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:15 crc kubenswrapper[4813]: I1125 10:33:15.841970 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:15 crc kubenswrapper[4813]: I1125 10:33:15.842026 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:15 crc kubenswrapper[4813]: I1125 10:33:15.842037 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:15 crc kubenswrapper[4813]: I1125 10:33:15.842055 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:15 crc kubenswrapper[4813]: I1125 10:33:15.842067 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:15Z","lastTransitionTime":"2025-11-25T10:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:15 crc kubenswrapper[4813]: I1125 10:33:15.944721 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:15 crc kubenswrapper[4813]: I1125 10:33:15.944763 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:15 crc kubenswrapper[4813]: I1125 10:33:15.944773 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:15 crc kubenswrapper[4813]: I1125 10:33:15.944789 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:15 crc kubenswrapper[4813]: I1125 10:33:15.944799 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:15Z","lastTransitionTime":"2025-11-25T10:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:16 crc kubenswrapper[4813]: I1125 10:33:16.047379 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:16 crc kubenswrapper[4813]: I1125 10:33:16.047421 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:16 crc kubenswrapper[4813]: I1125 10:33:16.047431 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:16 crc kubenswrapper[4813]: I1125 10:33:16.047448 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:16 crc kubenswrapper[4813]: I1125 10:33:16.047460 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:16Z","lastTransitionTime":"2025-11-25T10:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:16 crc kubenswrapper[4813]: I1125 10:33:16.149752 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:16 crc kubenswrapper[4813]: I1125 10:33:16.149872 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:16 crc kubenswrapper[4813]: I1125 10:33:16.149906 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:16 crc kubenswrapper[4813]: I1125 10:33:16.149944 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:16 crc kubenswrapper[4813]: I1125 10:33:16.149965 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:16Z","lastTransitionTime":"2025-11-25T10:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:16 crc kubenswrapper[4813]: I1125 10:33:16.252134 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:16 crc kubenswrapper[4813]: I1125 10:33:16.252189 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:16 crc kubenswrapper[4813]: I1125 10:33:16.252207 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:16 crc kubenswrapper[4813]: I1125 10:33:16.252225 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:16 crc kubenswrapper[4813]: I1125 10:33:16.252237 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:16Z","lastTransitionTime":"2025-11-25T10:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:16 crc kubenswrapper[4813]: I1125 10:33:16.354829 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:16 crc kubenswrapper[4813]: I1125 10:33:16.354878 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:16 crc kubenswrapper[4813]: I1125 10:33:16.354890 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:16 crc kubenswrapper[4813]: I1125 10:33:16.354908 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:16 crc kubenswrapper[4813]: I1125 10:33:16.354924 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:16Z","lastTransitionTime":"2025-11-25T10:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:16 crc kubenswrapper[4813]: I1125 10:33:16.462933 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:16 crc kubenswrapper[4813]: I1125 10:33:16.462977 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:16 crc kubenswrapper[4813]: I1125 10:33:16.462988 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:16 crc kubenswrapper[4813]: I1125 10:33:16.463004 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:16 crc kubenswrapper[4813]: I1125 10:33:16.463016 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:16Z","lastTransitionTime":"2025-11-25T10:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:16 crc kubenswrapper[4813]: I1125 10:33:16.566532 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:16 crc kubenswrapper[4813]: I1125 10:33:16.566577 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:16 crc kubenswrapper[4813]: I1125 10:33:16.566589 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:16 crc kubenswrapper[4813]: I1125 10:33:16.566605 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:16 crc kubenswrapper[4813]: I1125 10:33:16.566617 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:16Z","lastTransitionTime":"2025-11-25T10:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:16 crc kubenswrapper[4813]: I1125 10:33:16.621569 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 10:33:16 crc kubenswrapper[4813]: I1125 10:33:16.621589 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 10:33:16 crc kubenswrapper[4813]: E1125 10:33:16.621872 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
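The kube-multus restart recorded earlier in this excerpt (restartCount 1) came from its "Readiness Indicator file check" timing out while waiting for /host/run/multus/cni/net.d/10-ovn-kubernetes.conf, the file ovn-kubernetes writes once the default network is up. A minimal sketch of a poll-until-deadline wait of that shape; the interval and timeout values are assumptions, not taken from the multus configuration:

package main

import (
    "fmt"
    "os"
    "time"
)

// waitForFile polls for path until it exists or the timeout elapses,
// roughly the shape of a PollImmediate-style readiness check.
func waitForFile(path string, interval, timeout time.Duration) error {
    deadline := time.Now().Add(timeout)
    for {
        if _, err := os.Stat(path); err == nil {
            return nil
        }
        if time.Now().After(deadline) {
            return fmt.Errorf("timed out waiting for %s", path)
        }
        time.Sleep(interval)
    }
}

func main() {
    // Path taken from the multus log message earlier in this excerpt.
    indicator := "/host/run/multus/cni/net.d/10-ovn-kubernetes.conf"
    if err := waitForFile(indicator, time.Second, 45*time.Second); err != nil {
        fmt.Println("readiness indicator check failed:", err)
        os.Exit(1)
    }
    fmt.Println("default network is ready:", indicator)
}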
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 10:33:16 crc kubenswrapper[4813]: E1125 10:33:16.621979 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 10:33:16 crc kubenswrapper[4813]: I1125 10:33:16.669381 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:16 crc kubenswrapper[4813]: I1125 10:33:16.669419 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:16 crc kubenswrapper[4813]: I1125 10:33:16.669429 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:16 crc kubenswrapper[4813]: I1125 10:33:16.669449 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:16 crc kubenswrapper[4813]: I1125 10:33:16.669460 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:16Z","lastTransitionTime":"2025-11-25T10:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:16 crc kubenswrapper[4813]: I1125 10:33:16.772145 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:16 crc kubenswrapper[4813]: I1125 10:33:16.772186 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:16 crc kubenswrapper[4813]: I1125 10:33:16.772196 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:16 crc kubenswrapper[4813]: I1125 10:33:16.772211 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:16 crc kubenswrapper[4813]: I1125 10:33:16.772221 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:16Z","lastTransitionTime":"2025-11-25T10:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:16 crc kubenswrapper[4813]: I1125 10:33:16.874767 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:16 crc kubenswrapper[4813]: I1125 10:33:16.874804 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:16 crc kubenswrapper[4813]: I1125 10:33:16.874812 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:16 crc kubenswrapper[4813]: I1125 10:33:16.874826 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:16 crc kubenswrapper[4813]: I1125 10:33:16.874834 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:16Z","lastTransitionTime":"2025-11-25T10:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:16 crc kubenswrapper[4813]: I1125 10:33:16.978264 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:16 crc kubenswrapper[4813]: I1125 10:33:16.978314 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:16 crc kubenswrapper[4813]: I1125 10:33:16.978333 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:16 crc kubenswrapper[4813]: I1125 10:33:16.978353 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:16 crc kubenswrapper[4813]: I1125 10:33:16.978367 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:16Z","lastTransitionTime":"2025-11-25T10:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:17 crc kubenswrapper[4813]: I1125 10:33:17.080937 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:17 crc kubenswrapper[4813]: I1125 10:33:17.080977 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:17 crc kubenswrapper[4813]: I1125 10:33:17.080992 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:17 crc kubenswrapper[4813]: I1125 10:33:17.081010 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:17 crc kubenswrapper[4813]: I1125 10:33:17.081023 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:17Z","lastTransitionTime":"2025-11-25T10:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:17 crc kubenswrapper[4813]: I1125 10:33:17.183730 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:17 crc kubenswrapper[4813]: I1125 10:33:17.183775 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:17 crc kubenswrapper[4813]: I1125 10:33:17.183787 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:17 crc kubenswrapper[4813]: I1125 10:33:17.183804 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:17 crc kubenswrapper[4813]: I1125 10:33:17.183815 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:17Z","lastTransitionTime":"2025-11-25T10:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:17 crc kubenswrapper[4813]: I1125 10:33:17.286871 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:17 crc kubenswrapper[4813]: I1125 10:33:17.286926 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:17 crc kubenswrapper[4813]: I1125 10:33:17.286942 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:17 crc kubenswrapper[4813]: I1125 10:33:17.286961 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:17 crc kubenswrapper[4813]: I1125 10:33:17.286977 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:17Z","lastTransitionTime":"2025-11-25T10:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:17 crc kubenswrapper[4813]: I1125 10:33:17.389842 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:17 crc kubenswrapper[4813]: I1125 10:33:17.390149 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:17 crc kubenswrapper[4813]: I1125 10:33:17.390259 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:17 crc kubenswrapper[4813]: I1125 10:33:17.390355 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:17 crc kubenswrapper[4813]: I1125 10:33:17.390456 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:17Z","lastTransitionTime":"2025-11-25T10:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:17 crc kubenswrapper[4813]: I1125 10:33:17.493416 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:17 crc kubenswrapper[4813]: I1125 10:33:17.493481 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:17 crc kubenswrapper[4813]: I1125 10:33:17.493502 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:17 crc kubenswrapper[4813]: I1125 10:33:17.493527 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:17 crc kubenswrapper[4813]: I1125 10:33:17.493545 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:17Z","lastTransitionTime":"2025-11-25T10:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:17 crc kubenswrapper[4813]: I1125 10:33:17.597132 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:17 crc kubenswrapper[4813]: I1125 10:33:17.597208 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:17 crc kubenswrapper[4813]: I1125 10:33:17.597227 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:17 crc kubenswrapper[4813]: I1125 10:33:17.597261 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:17 crc kubenswrapper[4813]: I1125 10:33:17.597282 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:17Z","lastTransitionTime":"2025-11-25T10:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:17 crc kubenswrapper[4813]: I1125 10:33:17.621294 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 10:33:17 crc kubenswrapper[4813]: E1125 10:33:17.621435 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 10:33:17 crc kubenswrapper[4813]: I1125 10:33:17.621450 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-w28xl" Nov 25 10:33:17 crc kubenswrapper[4813]: E1125 10:33:17.621548 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w28xl" podUID="74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2" Nov 25 10:33:17 crc kubenswrapper[4813]: I1125 10:33:17.622071 4813 scope.go:117] "RemoveContainer" containerID="c47a786668d4e29437970008a1e91d74d92c964ba10a6eba1f8d405d05a26e7b" Nov 25 10:33:17 crc kubenswrapper[4813]: E1125 10:33:17.622204 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-8s5k7_openshift-ovn-kubernetes(8460ec76-ba89-4f8f-9055-d7274ab52d11)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" podUID="8460ec76-ba89-4f8f-9055-d7274ab52d11" Nov 25 10:33:17 crc kubenswrapper[4813]: I1125 10:33:17.700106 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:17 crc kubenswrapper[4813]: I1125 10:33:17.700142 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:17 crc kubenswrapper[4813]: I1125 10:33:17.700151 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:17 crc kubenswrapper[4813]: I1125 10:33:17.700165 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:17 crc kubenswrapper[4813]: I1125 10:33:17.700175 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:17Z","lastTransitionTime":"2025-11-25T10:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:17 crc kubenswrapper[4813]: I1125 10:33:17.802937 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:17 crc kubenswrapper[4813]: I1125 10:33:17.802987 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:17 crc kubenswrapper[4813]: I1125 10:33:17.802996 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:17 crc kubenswrapper[4813]: I1125 10:33:17.803011 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:17 crc kubenswrapper[4813]: I1125 10:33:17.803020 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:17Z","lastTransitionTime":"2025-11-25T10:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:17 crc kubenswrapper[4813]: I1125 10:33:17.905946 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:17 crc kubenswrapper[4813]: I1125 10:33:17.906023 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:17 crc kubenswrapper[4813]: I1125 10:33:17.906047 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:17 crc kubenswrapper[4813]: I1125 10:33:17.906075 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:17 crc kubenswrapper[4813]: I1125 10:33:17.906098 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:17Z","lastTransitionTime":"2025-11-25T10:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:18 crc kubenswrapper[4813]: I1125 10:33:18.008007 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:18 crc kubenswrapper[4813]: I1125 10:33:18.008057 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:18 crc kubenswrapper[4813]: I1125 10:33:18.008070 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:18 crc kubenswrapper[4813]: I1125 10:33:18.008086 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:18 crc kubenswrapper[4813]: I1125 10:33:18.008098 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:18Z","lastTransitionTime":"2025-11-25T10:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:18 crc kubenswrapper[4813]: I1125 10:33:18.110485 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:18 crc kubenswrapper[4813]: I1125 10:33:18.110524 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:18 crc kubenswrapper[4813]: I1125 10:33:18.110533 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:18 crc kubenswrapper[4813]: I1125 10:33:18.110547 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:18 crc kubenswrapper[4813]: I1125 10:33:18.110559 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:18Z","lastTransitionTime":"2025-11-25T10:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:18 crc kubenswrapper[4813]: I1125 10:33:18.212887 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:18 crc kubenswrapper[4813]: I1125 10:33:18.212959 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:18 crc kubenswrapper[4813]: I1125 10:33:18.212982 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:18 crc kubenswrapper[4813]: I1125 10:33:18.213010 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:18 crc kubenswrapper[4813]: I1125 10:33:18.213030 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:18Z","lastTransitionTime":"2025-11-25T10:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:18 crc kubenswrapper[4813]: I1125 10:33:18.316007 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:18 crc kubenswrapper[4813]: I1125 10:33:18.316088 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:18 crc kubenswrapper[4813]: I1125 10:33:18.316108 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:18 crc kubenswrapper[4813]: I1125 10:33:18.316134 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:18 crc kubenswrapper[4813]: I1125 10:33:18.316189 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:18Z","lastTransitionTime":"2025-11-25T10:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:18 crc kubenswrapper[4813]: I1125 10:33:18.418894 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:18 crc kubenswrapper[4813]: I1125 10:33:18.418967 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:18 crc kubenswrapper[4813]: I1125 10:33:18.418991 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:18 crc kubenswrapper[4813]: I1125 10:33:18.419019 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:18 crc kubenswrapper[4813]: I1125 10:33:18.419036 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:18Z","lastTransitionTime":"2025-11-25T10:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:18 crc kubenswrapper[4813]: I1125 10:33:18.521596 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:18 crc kubenswrapper[4813]: I1125 10:33:18.522297 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:18 crc kubenswrapper[4813]: I1125 10:33:18.522401 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:18 crc kubenswrapper[4813]: I1125 10:33:18.522488 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:18 crc kubenswrapper[4813]: I1125 10:33:18.522569 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:18Z","lastTransitionTime":"2025-11-25T10:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:18 crc kubenswrapper[4813]: I1125 10:33:18.620901 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 10:33:18 crc kubenswrapper[4813]: I1125 10:33:18.620943 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 10:33:18 crc kubenswrapper[4813]: E1125 10:33:18.621751 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 10:33:18 crc kubenswrapper[4813]: E1125 10:33:18.621878 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 10:33:18 crc kubenswrapper[4813]: I1125 10:33:18.634478 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:18 crc kubenswrapper[4813]: I1125 10:33:18.634555 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:18 crc kubenswrapper[4813]: I1125 10:33:18.634574 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:18 crc kubenswrapper[4813]: I1125 10:33:18.634604 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:18 crc kubenswrapper[4813]: I1125 10:33:18.634625 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:18Z","lastTransitionTime":"2025-11-25T10:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:18 crc kubenswrapper[4813]: I1125 10:33:18.737478 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:18 crc kubenswrapper[4813]: I1125 10:33:18.737808 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:18 crc kubenswrapper[4813]: I1125 10:33:18.737919 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:18 crc kubenswrapper[4813]: I1125 10:33:18.738022 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:18 crc kubenswrapper[4813]: I1125 10:33:18.738112 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:18Z","lastTransitionTime":"2025-11-25T10:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:18 crc kubenswrapper[4813]: I1125 10:33:18.840500 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:18 crc kubenswrapper[4813]: I1125 10:33:18.841520 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:18 crc kubenswrapper[4813]: I1125 10:33:18.841614 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:18 crc kubenswrapper[4813]: I1125 10:33:18.841727 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:18 crc kubenswrapper[4813]: I1125 10:33:18.841816 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:18Z","lastTransitionTime":"2025-11-25T10:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:18 crc kubenswrapper[4813]: I1125 10:33:18.945835 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:18 crc kubenswrapper[4813]: I1125 10:33:18.945903 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:18 crc kubenswrapper[4813]: I1125 10:33:18.945921 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:18 crc kubenswrapper[4813]: I1125 10:33:18.945952 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:18 crc kubenswrapper[4813]: I1125 10:33:18.945981 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:18Z","lastTransitionTime":"2025-11-25T10:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:19 crc kubenswrapper[4813]: I1125 10:33:19.049068 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:19 crc kubenswrapper[4813]: I1125 10:33:19.049109 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:19 crc kubenswrapper[4813]: I1125 10:33:19.049117 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:19 crc kubenswrapper[4813]: I1125 10:33:19.049132 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:19 crc kubenswrapper[4813]: I1125 10:33:19.049142 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:19Z","lastTransitionTime":"2025-11-25T10:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:19 crc kubenswrapper[4813]: I1125 10:33:19.152708 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:19 crc kubenswrapper[4813]: I1125 10:33:19.152802 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:19 crc kubenswrapper[4813]: I1125 10:33:19.152824 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:19 crc kubenswrapper[4813]: I1125 10:33:19.152869 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:19 crc kubenswrapper[4813]: I1125 10:33:19.152889 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:19Z","lastTransitionTime":"2025-11-25T10:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:19 crc kubenswrapper[4813]: I1125 10:33:19.256016 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:19 crc kubenswrapper[4813]: I1125 10:33:19.256059 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:19 crc kubenswrapper[4813]: I1125 10:33:19.256070 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:19 crc kubenswrapper[4813]: I1125 10:33:19.256087 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:19 crc kubenswrapper[4813]: I1125 10:33:19.256101 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:19Z","lastTransitionTime":"2025-11-25T10:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:19 crc kubenswrapper[4813]: I1125 10:33:19.358882 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:19 crc kubenswrapper[4813]: I1125 10:33:19.358907 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:19 crc kubenswrapper[4813]: I1125 10:33:19.358915 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:19 crc kubenswrapper[4813]: I1125 10:33:19.358928 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:19 crc kubenswrapper[4813]: I1125 10:33:19.358937 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:19Z","lastTransitionTime":"2025-11-25T10:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:19 crc kubenswrapper[4813]: I1125 10:33:19.460998 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:19 crc kubenswrapper[4813]: I1125 10:33:19.461033 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:19 crc kubenswrapper[4813]: I1125 10:33:19.461042 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:19 crc kubenswrapper[4813]: I1125 10:33:19.461056 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:19 crc kubenswrapper[4813]: I1125 10:33:19.461066 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:19Z","lastTransitionTime":"2025-11-25T10:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:19 crc kubenswrapper[4813]: I1125 10:33:19.563299 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:19 crc kubenswrapper[4813]: I1125 10:33:19.563354 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:19 crc kubenswrapper[4813]: I1125 10:33:19.563366 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:19 crc kubenswrapper[4813]: I1125 10:33:19.563386 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:19 crc kubenswrapper[4813]: I1125 10:33:19.563400 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:19Z","lastTransitionTime":"2025-11-25T10:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:19 crc kubenswrapper[4813]: I1125 10:33:19.621016 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 10:33:19 crc kubenswrapper[4813]: I1125 10:33:19.621102 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w28xl" Nov 25 10:33:19 crc kubenswrapper[4813]: E1125 10:33:19.621615 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 10:33:19 crc kubenswrapper[4813]: E1125 10:33:19.621751 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w28xl" podUID="74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2" Nov 25 10:33:19 crc kubenswrapper[4813]: I1125 10:33:19.666074 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:19 crc kubenswrapper[4813]: I1125 10:33:19.666118 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:19 crc kubenswrapper[4813]: I1125 10:33:19.666129 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:19 crc kubenswrapper[4813]: I1125 10:33:19.666144 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:19 crc kubenswrapper[4813]: I1125 10:33:19.666154 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:19Z","lastTransitionTime":"2025-11-25T10:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:19 crc kubenswrapper[4813]: I1125 10:33:19.769793 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:19 crc kubenswrapper[4813]: I1125 10:33:19.769857 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:19 crc kubenswrapper[4813]: I1125 10:33:19.769871 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:19 crc kubenswrapper[4813]: I1125 10:33:19.769888 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:19 crc kubenswrapper[4813]: I1125 10:33:19.769900 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:19Z","lastTransitionTime":"2025-11-25T10:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:19 crc kubenswrapper[4813]: I1125 10:33:19.872664 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:19 crc kubenswrapper[4813]: I1125 10:33:19.872834 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:19 crc kubenswrapper[4813]: I1125 10:33:19.872876 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:19 crc kubenswrapper[4813]: I1125 10:33:19.872908 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:19 crc kubenswrapper[4813]: I1125 10:33:19.872931 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:19Z","lastTransitionTime":"2025-11-25T10:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:19 crc kubenswrapper[4813]: I1125 10:33:19.976654 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:19 crc kubenswrapper[4813]: I1125 10:33:19.976777 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:19 crc kubenswrapper[4813]: I1125 10:33:19.976797 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:19 crc kubenswrapper[4813]: I1125 10:33:19.976824 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:19 crc kubenswrapper[4813]: I1125 10:33:19.976843 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:19Z","lastTransitionTime":"2025-11-25T10:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:20 crc kubenswrapper[4813]: I1125 10:33:20.079984 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:20 crc kubenswrapper[4813]: I1125 10:33:20.080031 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:20 crc kubenswrapper[4813]: I1125 10:33:20.080073 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:20 crc kubenswrapper[4813]: I1125 10:33:20.080094 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:20 crc kubenswrapper[4813]: I1125 10:33:20.080110 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:20Z","lastTransitionTime":"2025-11-25T10:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:20 crc kubenswrapper[4813]: I1125 10:33:20.182579 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:20 crc kubenswrapper[4813]: I1125 10:33:20.182652 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:20 crc kubenswrapper[4813]: I1125 10:33:20.182675 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:20 crc kubenswrapper[4813]: I1125 10:33:20.182745 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:20 crc kubenswrapper[4813]: I1125 10:33:20.182768 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:20Z","lastTransitionTime":"2025-11-25T10:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:20 crc kubenswrapper[4813]: I1125 10:33:20.285813 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:20 crc kubenswrapper[4813]: I1125 10:33:20.285869 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:20 crc kubenswrapper[4813]: I1125 10:33:20.285886 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:20 crc kubenswrapper[4813]: I1125 10:33:20.285908 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:20 crc kubenswrapper[4813]: I1125 10:33:20.285925 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:20Z","lastTransitionTime":"2025-11-25T10:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:20 crc kubenswrapper[4813]: I1125 10:33:20.389738 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:20 crc kubenswrapper[4813]: I1125 10:33:20.389786 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:20 crc kubenswrapper[4813]: I1125 10:33:20.389805 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:20 crc kubenswrapper[4813]: I1125 10:33:20.389829 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:20 crc kubenswrapper[4813]: I1125 10:33:20.389847 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:20Z","lastTransitionTime":"2025-11-25T10:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:20 crc kubenswrapper[4813]: I1125 10:33:20.492878 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:20 crc kubenswrapper[4813]: I1125 10:33:20.492921 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:20 crc kubenswrapper[4813]: I1125 10:33:20.492936 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:20 crc kubenswrapper[4813]: I1125 10:33:20.492956 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:20 crc kubenswrapper[4813]: I1125 10:33:20.492973 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:20Z","lastTransitionTime":"2025-11-25T10:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:20 crc kubenswrapper[4813]: I1125 10:33:20.595060 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:20 crc kubenswrapper[4813]: I1125 10:33:20.595204 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:20 crc kubenswrapper[4813]: I1125 10:33:20.595248 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:20 crc kubenswrapper[4813]: I1125 10:33:20.595265 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:20 crc kubenswrapper[4813]: I1125 10:33:20.595276 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:20Z","lastTransitionTime":"2025-11-25T10:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:20 crc kubenswrapper[4813]: I1125 10:33:20.621318 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 10:33:20 crc kubenswrapper[4813]: I1125 10:33:20.621328 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 10:33:20 crc kubenswrapper[4813]: E1125 10:33:20.621518 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 10:33:20 crc kubenswrapper[4813]: E1125 10:33:20.621677 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 10:33:20 crc kubenswrapper[4813]: I1125 10:33:20.697975 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:20 crc kubenswrapper[4813]: I1125 10:33:20.698042 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:20 crc kubenswrapper[4813]: I1125 10:33:20.698065 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:20 crc kubenswrapper[4813]: I1125 10:33:20.698096 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:20 crc kubenswrapper[4813]: I1125 10:33:20.698108 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:20Z","lastTransitionTime":"2025-11-25T10:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:20 crc kubenswrapper[4813]: I1125 10:33:20.801274 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:20 crc kubenswrapper[4813]: I1125 10:33:20.801342 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:20 crc kubenswrapper[4813]: I1125 10:33:20.801354 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:20 crc kubenswrapper[4813]: I1125 10:33:20.801412 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:20 crc kubenswrapper[4813]: I1125 10:33:20.801443 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:20Z","lastTransitionTime":"2025-11-25T10:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:20 crc kubenswrapper[4813]: I1125 10:33:20.906587 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:20 crc kubenswrapper[4813]: I1125 10:33:20.907594 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:20 crc kubenswrapper[4813]: I1125 10:33:20.907859 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:20 crc kubenswrapper[4813]: I1125 10:33:20.908044 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:20 crc kubenswrapper[4813]: I1125 10:33:20.908396 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:20Z","lastTransitionTime":"2025-11-25T10:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:21 crc kubenswrapper[4813]: I1125 10:33:21.013429 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:21 crc kubenswrapper[4813]: I1125 10:33:21.013471 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:21 crc kubenswrapper[4813]: I1125 10:33:21.013488 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:21 crc kubenswrapper[4813]: I1125 10:33:21.013510 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:21 crc kubenswrapper[4813]: I1125 10:33:21.013527 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:21Z","lastTransitionTime":"2025-11-25T10:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:21 crc kubenswrapper[4813]: I1125 10:33:21.116666 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:21 crc kubenswrapper[4813]: I1125 10:33:21.116771 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:21 crc kubenswrapper[4813]: I1125 10:33:21.116819 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:21 crc kubenswrapper[4813]: I1125 10:33:21.116849 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:21 crc kubenswrapper[4813]: I1125 10:33:21.116872 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:21Z","lastTransitionTime":"2025-11-25T10:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:21 crc kubenswrapper[4813]: I1125 10:33:21.220113 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:21 crc kubenswrapper[4813]: I1125 10:33:21.220248 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:21 crc kubenswrapper[4813]: I1125 10:33:21.220305 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:21 crc kubenswrapper[4813]: I1125 10:33:21.220338 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:21 crc kubenswrapper[4813]: I1125 10:33:21.220356 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:21Z","lastTransitionTime":"2025-11-25T10:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:21 crc kubenswrapper[4813]: I1125 10:33:21.323333 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:21 crc kubenswrapper[4813]: I1125 10:33:21.323404 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:21 crc kubenswrapper[4813]: I1125 10:33:21.323427 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:21 crc kubenswrapper[4813]: I1125 10:33:21.323455 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:21 crc kubenswrapper[4813]: I1125 10:33:21.323477 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:21Z","lastTransitionTime":"2025-11-25T10:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:21 crc kubenswrapper[4813]: I1125 10:33:21.426107 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:21 crc kubenswrapper[4813]: I1125 10:33:21.426391 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:21 crc kubenswrapper[4813]: I1125 10:33:21.426570 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:21 crc kubenswrapper[4813]: I1125 10:33:21.426788 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:21 crc kubenswrapper[4813]: I1125 10:33:21.426942 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:21Z","lastTransitionTime":"2025-11-25T10:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:21 crc kubenswrapper[4813]: I1125 10:33:21.530378 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:21 crc kubenswrapper[4813]: I1125 10:33:21.530761 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:21 crc kubenswrapper[4813]: I1125 10:33:21.530905 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:21 crc kubenswrapper[4813]: I1125 10:33:21.531077 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:21 crc kubenswrapper[4813]: I1125 10:33:21.531200 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:21Z","lastTransitionTime":"2025-11-25T10:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:21 crc kubenswrapper[4813]: I1125 10:33:21.621187 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 10:33:21 crc kubenswrapper[4813]: E1125 10:33:21.621373 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 10:33:21 crc kubenswrapper[4813]: I1125 10:33:21.621723 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w28xl" Nov 25 10:33:21 crc kubenswrapper[4813]: E1125 10:33:21.621850 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-w28xl" podUID="74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2" Nov 25 10:33:21 crc kubenswrapper[4813]: I1125 10:33:21.634265 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:21 crc kubenswrapper[4813]: I1125 10:33:21.634496 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:21 crc kubenswrapper[4813]: I1125 10:33:21.634719 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:21 crc kubenswrapper[4813]: I1125 10:33:21.634881 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:21 crc kubenswrapper[4813]: I1125 10:33:21.635054 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:21Z","lastTransitionTime":"2025-11-25T10:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:21 crc kubenswrapper[4813]: I1125 10:33:21.641711 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:21 crc kubenswrapper[4813]: I1125 10:33:21.641783 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:21 crc kubenswrapper[4813]: I1125 10:33:21.641806 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:21 crc kubenswrapper[4813]: I1125 10:33:21.641831 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:21 crc kubenswrapper[4813]: I1125 10:33:21.641851 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:21Z","lastTransitionTime":"2025-11-25T10:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:21 crc kubenswrapper[4813]: E1125 10:33:21.664315 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:33:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:33:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:33:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:33:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:33:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:33:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:33:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:33:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b8f6803-8c92-44d2-bc35-374b0f00608e\\\",\\\"systemUUID\\\":\\\"85f815b0-dc24-49ca-a7fb-6bc8e198cbb1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:21Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:21 crc kubenswrapper[4813]: I1125 10:33:21.668776 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:21 crc kubenswrapper[4813]: I1125 10:33:21.668827 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 10:33:21 crc kubenswrapper[4813]: I1125 10:33:21.668844 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:21 crc kubenswrapper[4813]: I1125 10:33:21.668869 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:21 crc kubenswrapper[4813]: I1125 10:33:21.668887 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:21Z","lastTransitionTime":"2025-11-25T10:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:21 crc kubenswrapper[4813]: E1125 10:33:21.687902 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:33:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:33:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:33:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:33:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:33:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:33:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:33:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:33:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b8f6803-8c92-44d2-bc35-374b0f00608e\\\",\\\"systemUUID\\\":\\\"85f815b0-dc24-49ca-a7fb-6bc8e198cbb1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:21Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:21 crc kubenswrapper[4813]: I1125 10:33:21.692380 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:21 crc kubenswrapper[4813]: I1125 10:33:21.692414 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 10:33:21 crc kubenswrapper[4813]: I1125 10:33:21.692424 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:21 crc kubenswrapper[4813]: I1125 10:33:21.692441 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:21 crc kubenswrapper[4813]: I1125 10:33:21.692453 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:21Z","lastTransitionTime":"2025-11-25T10:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:21 crc kubenswrapper[4813]: E1125 10:33:21.706087 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:33:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:33:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:33:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:33:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:33:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:33:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:33:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:33:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b8f6803-8c92-44d2-bc35-374b0f00608e\\\",\\\"systemUUID\\\":\\\"85f815b0-dc24-49ca-a7fb-6bc8e198cbb1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:21Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:21 crc kubenswrapper[4813]: I1125 10:33:21.711653 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:21 crc kubenswrapper[4813]: I1125 10:33:21.711770 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 10:33:21 crc kubenswrapper[4813]: I1125 10:33:21.711802 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:21 crc kubenswrapper[4813]: I1125 10:33:21.711832 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:21 crc kubenswrapper[4813]: I1125 10:33:21.711852 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:21Z","lastTransitionTime":"2025-11-25T10:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:21 crc kubenswrapper[4813]: E1125 10:33:21.730343 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:33:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:33:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:33:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:33:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:33:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:33:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:33:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:33:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b8f6803-8c92-44d2-bc35-374b0f00608e\\\",\\\"systemUUID\\\":\\\"85f815b0-dc24-49ca-a7fb-6bc8e198cbb1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:21Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:21 crc kubenswrapper[4813]: I1125 10:33:21.733518 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:21 crc kubenswrapper[4813]: I1125 10:33:21.733601 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 10:33:21 crc kubenswrapper[4813]: I1125 10:33:21.733616 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:21 crc kubenswrapper[4813]: I1125 10:33:21.733632 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:21 crc kubenswrapper[4813]: I1125 10:33:21.733643 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:21Z","lastTransitionTime":"2025-11-25T10:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:21 crc kubenswrapper[4813]: E1125 10:33:21.749510 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:33:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:33:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:33:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:33:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:33:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:33:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:33:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T10:33:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b8f6803-8c92-44d2-bc35-374b0f00608e\\\",\\\"systemUUID\\\":\\\"85f815b0-dc24-49ca-a7fb-6bc8e198cbb1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:21Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:21 crc kubenswrapper[4813]: E1125 10:33:21.749815 4813 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 10:33:21 crc kubenswrapper[4813]: I1125 10:33:21.752047 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 25 10:33:21 crc kubenswrapper[4813]: I1125 10:33:21.752140 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:21 crc kubenswrapper[4813]: I1125 10:33:21.752161 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:21 crc kubenswrapper[4813]: I1125 10:33:21.752184 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:21 crc kubenswrapper[4813]: I1125 10:33:21.752201 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:21Z","lastTransitionTime":"2025-11-25T10:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:21 crc kubenswrapper[4813]: I1125 10:33:21.855623 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:21 crc kubenswrapper[4813]: I1125 10:33:21.855701 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:21 crc kubenswrapper[4813]: I1125 10:33:21.855719 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:21 crc kubenswrapper[4813]: I1125 10:33:21.855741 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:21 crc kubenswrapper[4813]: I1125 10:33:21.855762 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:21Z","lastTransitionTime":"2025-11-25T10:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:21 crc kubenswrapper[4813]: I1125 10:33:21.958127 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:21 crc kubenswrapper[4813]: I1125 10:33:21.958176 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:21 crc kubenswrapper[4813]: I1125 10:33:21.958188 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:21 crc kubenswrapper[4813]: I1125 10:33:21.958204 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:21 crc kubenswrapper[4813]: I1125 10:33:21.958222 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:21Z","lastTransitionTime":"2025-11-25T10:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:22 crc kubenswrapper[4813]: I1125 10:33:22.060747 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:22 crc kubenswrapper[4813]: I1125 10:33:22.060779 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:22 crc kubenswrapper[4813]: I1125 10:33:22.060790 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:22 crc kubenswrapper[4813]: I1125 10:33:22.060806 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:22 crc kubenswrapper[4813]: I1125 10:33:22.060818 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:22Z","lastTransitionTime":"2025-11-25T10:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:22 crc kubenswrapper[4813]: I1125 10:33:22.163356 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:22 crc kubenswrapper[4813]: I1125 10:33:22.163393 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:22 crc kubenswrapper[4813]: I1125 10:33:22.163404 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:22 crc kubenswrapper[4813]: I1125 10:33:22.163420 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:22 crc kubenswrapper[4813]: I1125 10:33:22.163433 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:22Z","lastTransitionTime":"2025-11-25T10:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:22 crc kubenswrapper[4813]: I1125 10:33:22.265824 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:22 crc kubenswrapper[4813]: I1125 10:33:22.265863 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:22 crc kubenswrapper[4813]: I1125 10:33:22.265880 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:22 crc kubenswrapper[4813]: I1125 10:33:22.265899 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:22 crc kubenswrapper[4813]: I1125 10:33:22.265912 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:22Z","lastTransitionTime":"2025-11-25T10:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:22 crc kubenswrapper[4813]: I1125 10:33:22.368662 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:22 crc kubenswrapper[4813]: I1125 10:33:22.368738 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:22 crc kubenswrapper[4813]: I1125 10:33:22.368751 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:22 crc kubenswrapper[4813]: I1125 10:33:22.368768 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:22 crc kubenswrapper[4813]: I1125 10:33:22.368781 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:22Z","lastTransitionTime":"2025-11-25T10:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:22 crc kubenswrapper[4813]: I1125 10:33:22.472015 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:22 crc kubenswrapper[4813]: I1125 10:33:22.472060 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:22 crc kubenswrapper[4813]: I1125 10:33:22.472072 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:22 crc kubenswrapper[4813]: I1125 10:33:22.472089 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:22 crc kubenswrapper[4813]: I1125 10:33:22.472105 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:22Z","lastTransitionTime":"2025-11-25T10:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:22 crc kubenswrapper[4813]: I1125 10:33:22.575448 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:22 crc kubenswrapper[4813]: I1125 10:33:22.575504 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:22 crc kubenswrapper[4813]: I1125 10:33:22.575529 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:22 crc kubenswrapper[4813]: I1125 10:33:22.575557 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:22 crc kubenswrapper[4813]: I1125 10:33:22.575576 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:22Z","lastTransitionTime":"2025-11-25T10:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:22 crc kubenswrapper[4813]: I1125 10:33:22.621153 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 10:33:22 crc kubenswrapper[4813]: I1125 10:33:22.621253 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 10:33:22 crc kubenswrapper[4813]: E1125 10:33:22.621304 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 10:33:22 crc kubenswrapper[4813]: E1125 10:33:22.621452 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 10:33:22 crc kubenswrapper[4813]: I1125 10:33:22.678439 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:22 crc kubenswrapper[4813]: I1125 10:33:22.678518 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:22 crc kubenswrapper[4813]: I1125 10:33:22.678536 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:22 crc kubenswrapper[4813]: I1125 10:33:22.678568 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:22 crc kubenswrapper[4813]: I1125 10:33:22.678587 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:22Z","lastTransitionTime":"2025-11-25T10:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:22 crc kubenswrapper[4813]: I1125 10:33:22.783071 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:22 crc kubenswrapper[4813]: I1125 10:33:22.783138 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:22 crc kubenswrapper[4813]: I1125 10:33:22.783155 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:22 crc kubenswrapper[4813]: I1125 10:33:22.783181 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:22 crc kubenswrapper[4813]: I1125 10:33:22.783201 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:22Z","lastTransitionTime":"2025-11-25T10:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:22 crc kubenswrapper[4813]: I1125 10:33:22.886791 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:22 crc kubenswrapper[4813]: I1125 10:33:22.886876 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:22 crc kubenswrapper[4813]: I1125 10:33:22.886903 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:22 crc kubenswrapper[4813]: I1125 10:33:22.886934 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:22 crc kubenswrapper[4813]: I1125 10:33:22.886957 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:22Z","lastTransitionTime":"2025-11-25T10:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:22 crc kubenswrapper[4813]: I1125 10:33:22.990202 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:22 crc kubenswrapper[4813]: I1125 10:33:22.990242 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:22 crc kubenswrapper[4813]: I1125 10:33:22.990254 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:22 crc kubenswrapper[4813]: I1125 10:33:22.990269 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:22 crc kubenswrapper[4813]: I1125 10:33:22.990280 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:22Z","lastTransitionTime":"2025-11-25T10:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:23 crc kubenswrapper[4813]: I1125 10:33:23.092715 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:23 crc kubenswrapper[4813]: I1125 10:33:23.092777 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:23 crc kubenswrapper[4813]: I1125 10:33:23.092795 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:23 crc kubenswrapper[4813]: I1125 10:33:23.092823 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:23 crc kubenswrapper[4813]: I1125 10:33:23.092842 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:23Z","lastTransitionTime":"2025-11-25T10:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:23 crc kubenswrapper[4813]: I1125 10:33:23.195889 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:23 crc kubenswrapper[4813]: I1125 10:33:23.195960 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:23 crc kubenswrapper[4813]: I1125 10:33:23.195983 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:23 crc kubenswrapper[4813]: I1125 10:33:23.196014 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:23 crc kubenswrapper[4813]: I1125 10:33:23.196048 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:23Z","lastTransitionTime":"2025-11-25T10:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:23 crc kubenswrapper[4813]: I1125 10:33:23.298840 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:23 crc kubenswrapper[4813]: I1125 10:33:23.298888 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:23 crc kubenswrapper[4813]: I1125 10:33:23.298900 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:23 crc kubenswrapper[4813]: I1125 10:33:23.298917 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:23 crc kubenswrapper[4813]: I1125 10:33:23.298931 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:23Z","lastTransitionTime":"2025-11-25T10:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:23 crc kubenswrapper[4813]: I1125 10:33:23.401392 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:23 crc kubenswrapper[4813]: I1125 10:33:23.401445 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:23 crc kubenswrapper[4813]: I1125 10:33:23.401453 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:23 crc kubenswrapper[4813]: I1125 10:33:23.401467 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:23 crc kubenswrapper[4813]: I1125 10:33:23.401483 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:23Z","lastTransitionTime":"2025-11-25T10:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:23 crc kubenswrapper[4813]: I1125 10:33:23.505037 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:23 crc kubenswrapper[4813]: I1125 10:33:23.505117 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:23 crc kubenswrapper[4813]: I1125 10:33:23.505147 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:23 crc kubenswrapper[4813]: I1125 10:33:23.505180 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:23 crc kubenswrapper[4813]: I1125 10:33:23.505204 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:23Z","lastTransitionTime":"2025-11-25T10:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:23 crc kubenswrapper[4813]: I1125 10:33:23.608373 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:23 crc kubenswrapper[4813]: I1125 10:33:23.608446 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:23 crc kubenswrapper[4813]: I1125 10:33:23.608469 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:23 crc kubenswrapper[4813]: I1125 10:33:23.608498 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:23 crc kubenswrapper[4813]: I1125 10:33:23.608523 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:23Z","lastTransitionTime":"2025-11-25T10:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:23 crc kubenswrapper[4813]: I1125 10:33:23.621284 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w28xl" Nov 25 10:33:23 crc kubenswrapper[4813]: I1125 10:33:23.621316 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 10:33:23 crc kubenswrapper[4813]: E1125 10:33:23.621616 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w28xl" podUID="74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2" Nov 25 10:33:23 crc kubenswrapper[4813]: E1125 10:33:23.622586 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 10:33:23 crc kubenswrapper[4813]: I1125 10:33:23.641995 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7391b3f2-dce9-4286-b622-7e7202a042c0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b823e81d1130cdb4373ba0b3d00a5f2d0717e34dcf36d2172550263b44e953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fa62598abd071ec69894326a022e35c2b383a5d5a1b893b0ecc1e30b8b775ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21dd198f1963287a0866dc0aa9d9854472f833cac0d0146a142a370e236b09f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9ab19e784bbd45e4f4c23288211674ac0d0affbe2736d338967e9237d672760\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9ab19e784bbd45e4f4c23288211674ac0d0affbe2736d338967e9237d672760\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:31:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:23Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:23 crc kubenswrapper[4813]: I1125 10:33:23.663279 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00ebb057ca6152197fa76fc78787533ab8ddaa1e1a096c624e3efc5fcf091332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616fae5157b8d51f903f870d19e7ed40447c3eb954b0e1bd0b3323c27deb59f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:23Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:23 crc kubenswrapper[4813]: I1125 10:33:23.679364 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adac7b8b6297f077adc2d0e402547d19845a4b66a1279e143ba89f014ccdbf15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:23Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:23 crc kubenswrapper[4813]: I1125 10:33:23.696956 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rlpbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98439068-3c89-4c1b-8bb8-8aa848ef0cd3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e45d1cfd847d1fbd71b9790ea8725a76ffc6117b372d227e921dad0143f7b30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73be3b0cabd20c94bd5c69211038398effe8adbb93eda17dbb136f17fa5ba62e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T10:32:55Z\\\",\\\"message\\\":\\\"2025-11-25T10:32:09+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_39ca8145-fa4d-4ac0-ba01-62afbe2deb27\\\\n2025-11-25T10:32:09+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_39ca8145-fa4d-4ac0-ba01-62afbe2deb27 to /host/opt/cni/bin/\\\\n2025-11-25T10:32:10Z [verbose] multus-daemon started\\\\n2025-11-25T10:32:10Z [verbose] Readiness Indicator file check\\\\n2025-11-25T10:32:55Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdxm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rlpbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:23Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:23 crc kubenswrapper[4813]: I1125 10:33:23.710741 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qltmc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7637b907-3ae7-4b15-a4b9-a0c2217384a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://713975d4e8de4e14484cbd711f5279ddce3acad00571bf052b0ed728bd1a0ccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qvsb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qltmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:23Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:23 crc kubenswrapper[4813]: I1125 10:33:23.710927 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:23 crc kubenswrapper[4813]: I1125 10:33:23.710986 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:23 crc kubenswrapper[4813]: I1125 10:33:23.711003 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:23 crc kubenswrapper[4813]: I1125 10:33:23.711027 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:23 crc kubenswrapper[4813]: I1125 10:33:23.711049 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:23Z","lastTransitionTime":"2025-11-25T10:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:23 crc kubenswrapper[4813]: I1125 10:33:23.725773 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sbzfj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eccc6bcf-65c9-4741-a1d7-e5545661d3d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf35ea2947d355207c657bf7ef54d855cead727db293543efaa653bb03718f6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8s86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75f58510a2e937f933fadfec014e5ddff8e6cea4df17e8ade67f4c7af9be7104\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8s86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:17Z\\\"}}\" 
for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sbzfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:23Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:23 crc kubenswrapper[4813]: I1125 10:33:23.744820 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"061a2a52-878f-4543-8408-3a7b838f8881\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761ff3f6b4afa8edd4892d9fe727e977fb9700a8c7ab1c149c12bfa6431951c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf09669b247e0daa0787d296aa833570e1a542082a7a698bb499dc34f16fa4be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e593ff2a6412d8dfd3cd96e456f4fe9e2f8b04302d5b9036b828a3cf480b573\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11e2aa9eaa941ade1982256194422becbe3f375508cd507f603a822b10e03134\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:23Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:23 crc kubenswrapper[4813]: I1125 10:33:23.755316 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65e26a5c-3d20-48c4-b0aa-e7e7c439a18f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71ac844ac0be61d9aa56028670f20db4c9c600feffd4355d9636253b7d50e18d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://312228ac2cd8a213ffcac9564ff0abe8b6f330abca932992170d2f6ccea5edb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://312228ac2cd8a213ffcac9564ff0abe8b6f330abca932992170d2f6ccea5edb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:31:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:23Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:23 crc kubenswrapper[4813]: I1125 10:33:23.755650 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2-metrics-certs\") pod \"network-metrics-daemon-w28xl\" (UID: 
\"74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2\") " pod="openshift-multus/network-metrics-daemon-w28xl" Nov 25 10:33:23 crc kubenswrapper[4813]: E1125 10:33:23.755908 4813 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 10:33:23 crc kubenswrapper[4813]: E1125 10:33:23.756040 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2-metrics-certs podName:74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2 nodeName:}" failed. No retries permitted until 2025-11-25 10:34:27.756012436 +0000 UTC m=+164.885722362 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2-metrics-certs") pod "network-metrics-daemon-w28xl" (UID: "74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 10:33:23 crc kubenswrapper[4813]: I1125 10:33:23.776268 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:23Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:23 crc kubenswrapper[4813]: I1125 10:33:23.794785 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:23Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:23 crc kubenswrapper[4813]: I1125 10:33:23.814357 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:23 crc kubenswrapper[4813]: I1125 10:33:23.814449 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:23 crc kubenswrapper[4813]: I1125 10:33:23.814466 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:23 crc kubenswrapper[4813]: I1125 10:33:23.814519 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:23 crc kubenswrapper[4813]: I1125 10:33:23.814536 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:23Z","lastTransitionTime":"2025-11-25T10:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:23 crc kubenswrapper[4813]: I1125 10:33:23.815177 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:23Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:23 crc kubenswrapper[4813]: I1125 10:33:23.835310 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4s9w7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2ac9045-f02f-4149-afa5-61da1452d547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbbdce0d7869276078c48cf3c335c37ec3c8f324e76db30e312485508977ed8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://792d5ec80cac3667bf3ad534b473ae86eca391f49782cfc0938d789eefd24a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://792d5ec80cac3667bf3ad534b473ae86eca391f49782cfc0938d789eefd24a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2afd11e5128cad91161f49b1e5d6ac378dbd319773996dbe702bf678a45a4a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2afd11e5128cad91161f49b1e5d6ac378dbd319773996dbe702bf678a45a4a91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00af788f1e52f5e8adb3f20e61f5fbcfd1090e97a1f24d4ebe926dad23155ae5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00af788f1e52f5e8adb3f20e61f5fbcfd1090e97a1f24d4ebe926dad23155ae5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://156bff53f3008351c3f76a0cc5e9c3eeb4f19a7201392d095bc62012791d9fa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://156bff53f3008351c3f76a0cc5e9c3eeb4f19a7201392d095bc62012791d9fa5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a98899b475454bf9249b6437439cb15a56278a71678cd2c7a430b4c14ef4022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a98899b475454bf9249b6437439cb15a56278a71678cd2c7a430b4c14ef4022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://345ac26e481961ce51e21644b04d31cd5a82c981e9a2355ddd863036cabb4a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://345ac26e481961ce51e21644b04d31cd5a82c981e9a2355ddd863036cabb4a4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgwgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4s9w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:23Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:23 crc kubenswrapper[4813]: I1125 10:33:23.858505 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8460ec76-ba89-4f8f-9055-d7274ab52d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0292e263e2315d5f0352fb15d9e84e89f103c0b8e3371db2a611b001c5a3fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ab3178c217051fe9026c77a963c194bed57ec0fb9521678f41c7c16235ca789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee35613ff013fdd9f9ba4aa81006a99cd328ab65010b9b337815829bfcc88937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1581fa41d3a426258f7c464d5e0f2ad431917ccec0616d26bb8b0affa320c90e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c4c4032f6080041e0b54686cb2c9981d2578e7a2bd02bcc1cf008c8fa3bfb6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7324d51c21107fadbd2f170e16f3cc20fc473ca9b7b1bbe0fc5e64378bd6ab7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47a786668d4e29437970008a1e91d74d92c964ba10a6eba1f8d405d05a26e7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c47a786668d4e29437970008a1e91d74d92c964ba10a6eba1f8d405d05a26e7b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T10:33:04Z\\\",\\\"message\\\":\\\"-manager/kube-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"ba175bbe-5cc4-47e6-a32d-57693e1320bd\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-controller-manager/kube-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-controller-manager/kube-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-controller-manager/kube-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.36\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF1125 10:33:04.482189 6884 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T10:33:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8s5k7_openshift-ovn-kubernetes(8460ec76-ba89-4f8f-9055-d7274ab52d11)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32898e756d7697bcb5b6ae6780b7b752be67b44b9ce8c2f2459477c7f0b0a28d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6554bcb1ce7e97de39f99556fc4e3db63a583ea45bd87706a3c7737a8bde4f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6554bcb1ce7e97de39f99556fc4e3db63a583ea45bd87706a3c7737a8bde4f5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svkcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8s5k7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:23Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:23 crc kubenswrapper[4813]: I1125 10:33:23.872183 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-w28xl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4dw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4dw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-w28xl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:23Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:23 crc kubenswrapper[4813]: I1125 10:33:23.887050 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03303956e8d88df49c9c142a7074fa39272a78ea67e868b302d3a663d7f7178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:23Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:23 crc kubenswrapper[4813]: I1125 10:33:23.902764 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86379c39-b839-4552-949c-35431188a3a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:31:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf4d6feac8fd516ce2d5e2ec13519c2bbd2d152cffe7c434fe2c4b478e8c9a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80f2017cddd8c12997b1818074df5aa37a902dca43c4b60dda58080e1887f8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225dc69c294a0063eda858d71902e848fb59d4595c25bfeecdf8dfb60fdcd6f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cbb3888ff07d07784e188a0b7b49e0f5b421cfaeb61924a0a46094fb3795b32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e393f04b541e0fc8c686b42396605529aa65fdaaf6602dd7c64a322a5071d643\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T10:31:57Z\\\",\\\"message\\\":\\\"W1125 10:31:46.900040 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1125 10:31:46.900557 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764066706 cert, and key in /tmp/serving-cert-1749499007/serving-signer.crt, /tmp/serving-cert-1749499007/serving-signer.key\\\\nI1125 10:31:47.317086 1 observer_polling.go:159] Starting file observer\\\\nW1125 10:31:47.321027 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 10:31:47.321219 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 10:31:47.325062 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1749499007/tls.crt::/tmp/serving-cert-1749499007/tls.key\\\\\\\"\\\\nF1125 10:31:57.761534 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46e1b456988c700012c86fac792b65d2e7c9a049057d5a17efbf600418191910\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:31:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3af1cb4a9a556e116874a9b7f7cb75a99db83b991b65280b6b2a96233ca0a85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://f3af1cb4a9a556e116874a9b7f7cb75a99db83b991b65280b6b2a96233ca0a85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T10:31:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T10:31:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:31:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:23Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:23 crc kubenswrapper[4813]: I1125 10:33:23.913007 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mmh87" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bcb41f8-67f5-4a87-8b49-07da054e0c81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fbf69eb2f0afb160e40675e9a17e8a9798a3f02de6a2f3aae7a30ef989e5479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xtc7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mmh87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:23Z is after 
2025-08-24T17:21:41Z" Nov 25 10:33:23 crc kubenswrapper[4813]: I1125 10:33:23.916700 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:23 crc kubenswrapper[4813]: I1125 10:33:23.916727 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:23 crc kubenswrapper[4813]: I1125 10:33:23.916736 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:23 crc kubenswrapper[4813]: I1125 10:33:23.916774 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:23 crc kubenswrapper[4813]: I1125 10:33:23.916786 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:23Z","lastTransitionTime":"2025-11-25T10:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:23 crc kubenswrapper[4813]: I1125 10:33:23.926730 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ece7e9c-d49a-4348-98ec-bd6ab589f750\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T10:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b85e2f2d2a870b205f19402a20540fa67104d12d2fcd412ada24c78b0602f2ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j55j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c16599a2b18976267f55176085b4b11e3e253e308707081d06d28d64f4dbb627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e
95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T10:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j55j7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T10:32:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-knhz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T10:33:23Z is after 2025-08-24T17:21:41Z" Nov 25 10:33:24 crc kubenswrapper[4813]: I1125 10:33:24.020353 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:24 crc kubenswrapper[4813]: I1125 10:33:24.021134 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:24 crc kubenswrapper[4813]: I1125 10:33:24.021168 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:24 crc kubenswrapper[4813]: I1125 10:33:24.021193 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:24 crc kubenswrapper[4813]: I1125 10:33:24.021206 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:24Z","lastTransitionTime":"2025-11-25T10:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:24 crc kubenswrapper[4813]: I1125 10:33:24.123768 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:24 crc kubenswrapper[4813]: I1125 10:33:24.123801 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:24 crc kubenswrapper[4813]: I1125 10:33:24.123814 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:24 crc kubenswrapper[4813]: I1125 10:33:24.123830 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:24 crc kubenswrapper[4813]: I1125 10:33:24.123840 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:24Z","lastTransitionTime":"2025-11-25T10:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:24 crc kubenswrapper[4813]: I1125 10:33:24.226374 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:24 crc kubenswrapper[4813]: I1125 10:33:24.226420 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:24 crc kubenswrapper[4813]: I1125 10:33:24.226435 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:24 crc kubenswrapper[4813]: I1125 10:33:24.226451 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:24 crc kubenswrapper[4813]: I1125 10:33:24.226464 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:24Z","lastTransitionTime":"2025-11-25T10:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:24 crc kubenswrapper[4813]: I1125 10:33:24.329445 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:24 crc kubenswrapper[4813]: I1125 10:33:24.329475 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:24 crc kubenswrapper[4813]: I1125 10:33:24.329485 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:24 crc kubenswrapper[4813]: I1125 10:33:24.329498 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:24 crc kubenswrapper[4813]: I1125 10:33:24.329507 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:24Z","lastTransitionTime":"2025-11-25T10:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:24 crc kubenswrapper[4813]: I1125 10:33:24.432815 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:24 crc kubenswrapper[4813]: I1125 10:33:24.432863 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:24 crc kubenswrapper[4813]: I1125 10:33:24.432879 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:24 crc kubenswrapper[4813]: I1125 10:33:24.432896 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:24 crc kubenswrapper[4813]: I1125 10:33:24.432907 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:24Z","lastTransitionTime":"2025-11-25T10:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:24 crc kubenswrapper[4813]: I1125 10:33:24.534914 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:24 crc kubenswrapper[4813]: I1125 10:33:24.534972 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:24 crc kubenswrapper[4813]: I1125 10:33:24.534990 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:24 crc kubenswrapper[4813]: I1125 10:33:24.535010 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:24 crc kubenswrapper[4813]: I1125 10:33:24.535024 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:24Z","lastTransitionTime":"2025-11-25T10:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:24 crc kubenswrapper[4813]: I1125 10:33:24.621238 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 10:33:24 crc kubenswrapper[4813]: E1125 10:33:24.621360 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 10:33:24 crc kubenswrapper[4813]: I1125 10:33:24.621717 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 10:33:24 crc kubenswrapper[4813]: E1125 10:33:24.621970 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 10:33:24 crc kubenswrapper[4813]: I1125 10:33:24.637801 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:24 crc kubenswrapper[4813]: I1125 10:33:24.637866 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:24 crc kubenswrapper[4813]: I1125 10:33:24.637887 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:24 crc kubenswrapper[4813]: I1125 10:33:24.637913 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:24 crc kubenswrapper[4813]: I1125 10:33:24.637937 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:24Z","lastTransitionTime":"2025-11-25T10:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:24 crc kubenswrapper[4813]: I1125 10:33:24.741339 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:24 crc kubenswrapper[4813]: I1125 10:33:24.741506 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:24 crc kubenswrapper[4813]: I1125 10:33:24.741609 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:24 crc kubenswrapper[4813]: I1125 10:33:24.741639 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:24 crc kubenswrapper[4813]: I1125 10:33:24.741655 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:24Z","lastTransitionTime":"2025-11-25T10:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:24 crc kubenswrapper[4813]: I1125 10:33:24.845535 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:24 crc kubenswrapper[4813]: I1125 10:33:24.845615 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:24 crc kubenswrapper[4813]: I1125 10:33:24.845638 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:24 crc kubenswrapper[4813]: I1125 10:33:24.845668 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:24 crc kubenswrapper[4813]: I1125 10:33:24.845739 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:24Z","lastTransitionTime":"2025-11-25T10:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:24 crc kubenswrapper[4813]: I1125 10:33:24.949041 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:24 crc kubenswrapper[4813]: I1125 10:33:24.949102 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:24 crc kubenswrapper[4813]: I1125 10:33:24.949124 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:24 crc kubenswrapper[4813]: I1125 10:33:24.949153 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:24 crc kubenswrapper[4813]: I1125 10:33:24.949174 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:24Z","lastTransitionTime":"2025-11-25T10:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:25 crc kubenswrapper[4813]: I1125 10:33:25.052241 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:25 crc kubenswrapper[4813]: I1125 10:33:25.052279 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:25 crc kubenswrapper[4813]: I1125 10:33:25.052286 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:25 crc kubenswrapper[4813]: I1125 10:33:25.052300 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:25 crc kubenswrapper[4813]: I1125 10:33:25.052309 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:25Z","lastTransitionTime":"2025-11-25T10:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:25 crc kubenswrapper[4813]: I1125 10:33:25.154984 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:25 crc kubenswrapper[4813]: I1125 10:33:25.155082 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:25 crc kubenswrapper[4813]: I1125 10:33:25.155099 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:25 crc kubenswrapper[4813]: I1125 10:33:25.155123 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:25 crc kubenswrapper[4813]: I1125 10:33:25.155142 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:25Z","lastTransitionTime":"2025-11-25T10:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:25 crc kubenswrapper[4813]: I1125 10:33:25.258263 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:25 crc kubenswrapper[4813]: I1125 10:33:25.258325 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:25 crc kubenswrapper[4813]: I1125 10:33:25.258343 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:25 crc kubenswrapper[4813]: I1125 10:33:25.258369 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:25 crc kubenswrapper[4813]: I1125 10:33:25.258388 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:25Z","lastTransitionTime":"2025-11-25T10:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:25 crc kubenswrapper[4813]: I1125 10:33:25.360586 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:25 crc kubenswrapper[4813]: I1125 10:33:25.360622 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:25 crc kubenswrapper[4813]: I1125 10:33:25.360633 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:25 crc kubenswrapper[4813]: I1125 10:33:25.360647 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:25 crc kubenswrapper[4813]: I1125 10:33:25.360657 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:25Z","lastTransitionTime":"2025-11-25T10:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:25 crc kubenswrapper[4813]: I1125 10:33:25.464001 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:25 crc kubenswrapper[4813]: I1125 10:33:25.464206 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:25 crc kubenswrapper[4813]: I1125 10:33:25.464282 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:25 crc kubenswrapper[4813]: I1125 10:33:25.464318 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:25 crc kubenswrapper[4813]: I1125 10:33:25.464386 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:25Z","lastTransitionTime":"2025-11-25T10:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:25 crc kubenswrapper[4813]: I1125 10:33:25.567812 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:25 crc kubenswrapper[4813]: I1125 10:33:25.567883 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:25 crc kubenswrapper[4813]: I1125 10:33:25.567895 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:25 crc kubenswrapper[4813]: I1125 10:33:25.567912 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:25 crc kubenswrapper[4813]: I1125 10:33:25.567925 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:25Z","lastTransitionTime":"2025-11-25T10:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:25 crc kubenswrapper[4813]: I1125 10:33:25.620973 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 10:33:25 crc kubenswrapper[4813]: I1125 10:33:25.621015 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w28xl" Nov 25 10:33:25 crc kubenswrapper[4813]: E1125 10:33:25.621127 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 10:33:25 crc kubenswrapper[4813]: E1125 10:33:25.621309 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w28xl" podUID="74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2" Nov 25 10:33:25 crc kubenswrapper[4813]: I1125 10:33:25.670304 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:25 crc kubenswrapper[4813]: I1125 10:33:25.670357 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:25 crc kubenswrapper[4813]: I1125 10:33:25.670375 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:25 crc kubenswrapper[4813]: I1125 10:33:25.670406 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:25 crc kubenswrapper[4813]: I1125 10:33:25.670423 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:25Z","lastTransitionTime":"2025-11-25T10:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:25 crc kubenswrapper[4813]: I1125 10:33:25.772619 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:25 crc kubenswrapper[4813]: I1125 10:33:25.772719 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:25 crc kubenswrapper[4813]: I1125 10:33:25.772735 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:25 crc kubenswrapper[4813]: I1125 10:33:25.772753 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:25 crc kubenswrapper[4813]: I1125 10:33:25.772767 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:25Z","lastTransitionTime":"2025-11-25T10:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:25 crc kubenswrapper[4813]: I1125 10:33:25.875809 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:25 crc kubenswrapper[4813]: I1125 10:33:25.875870 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:25 crc kubenswrapper[4813]: I1125 10:33:25.875897 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:25 crc kubenswrapper[4813]: I1125 10:33:25.875928 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:25 crc kubenswrapper[4813]: I1125 10:33:25.875954 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:25Z","lastTransitionTime":"2025-11-25T10:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:25 crc kubenswrapper[4813]: I1125 10:33:25.978305 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:25 crc kubenswrapper[4813]: I1125 10:33:25.978381 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:25 crc kubenswrapper[4813]: I1125 10:33:25.978405 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:25 crc kubenswrapper[4813]: I1125 10:33:25.978438 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:25 crc kubenswrapper[4813]: I1125 10:33:25.978461 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:25Z","lastTransitionTime":"2025-11-25T10:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:26 crc kubenswrapper[4813]: I1125 10:33:26.080564 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:26 crc kubenswrapper[4813]: I1125 10:33:26.080621 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:26 crc kubenswrapper[4813]: I1125 10:33:26.080638 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:26 crc kubenswrapper[4813]: I1125 10:33:26.080656 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:26 crc kubenswrapper[4813]: I1125 10:33:26.080671 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:26Z","lastTransitionTime":"2025-11-25T10:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:26 crc kubenswrapper[4813]: I1125 10:33:26.183418 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:26 crc kubenswrapper[4813]: I1125 10:33:26.183470 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:26 crc kubenswrapper[4813]: I1125 10:33:26.183484 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:26 crc kubenswrapper[4813]: I1125 10:33:26.183504 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:26 crc kubenswrapper[4813]: I1125 10:33:26.183519 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:26Z","lastTransitionTime":"2025-11-25T10:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:26 crc kubenswrapper[4813]: I1125 10:33:26.286329 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:26 crc kubenswrapper[4813]: I1125 10:33:26.286399 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:26 crc kubenswrapper[4813]: I1125 10:33:26.286421 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:26 crc kubenswrapper[4813]: I1125 10:33:26.286450 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:26 crc kubenswrapper[4813]: I1125 10:33:26.286472 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:26Z","lastTransitionTime":"2025-11-25T10:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:26 crc kubenswrapper[4813]: I1125 10:33:26.389900 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:26 crc kubenswrapper[4813]: I1125 10:33:26.389962 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:26 crc kubenswrapper[4813]: I1125 10:33:26.389982 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:26 crc kubenswrapper[4813]: I1125 10:33:26.390003 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:26 crc kubenswrapper[4813]: I1125 10:33:26.390020 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:26Z","lastTransitionTime":"2025-11-25T10:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:26 crc kubenswrapper[4813]: I1125 10:33:26.494159 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:26 crc kubenswrapper[4813]: I1125 10:33:26.494219 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:26 crc kubenswrapper[4813]: I1125 10:33:26.494232 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:26 crc kubenswrapper[4813]: I1125 10:33:26.494252 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:26 crc kubenswrapper[4813]: I1125 10:33:26.494267 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:26Z","lastTransitionTime":"2025-11-25T10:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:26 crc kubenswrapper[4813]: I1125 10:33:26.598339 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:26 crc kubenswrapper[4813]: I1125 10:33:26.598381 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:26 crc kubenswrapper[4813]: I1125 10:33:26.598389 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:26 crc kubenswrapper[4813]: I1125 10:33:26.598405 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:26 crc kubenswrapper[4813]: I1125 10:33:26.598414 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:26Z","lastTransitionTime":"2025-11-25T10:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:26 crc kubenswrapper[4813]: I1125 10:33:26.620463 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 10:33:26 crc kubenswrapper[4813]: E1125 10:33:26.620559 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 10:33:26 crc kubenswrapper[4813]: I1125 10:33:26.620972 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 10:33:26 crc kubenswrapper[4813]: E1125 10:33:26.621037 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 10:33:26 crc kubenswrapper[4813]: I1125 10:33:26.637391 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Nov 25 10:33:26 crc kubenswrapper[4813]: I1125 10:33:26.700228 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:26 crc kubenswrapper[4813]: I1125 10:33:26.700265 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:26 crc kubenswrapper[4813]: I1125 10:33:26.700282 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:26 crc kubenswrapper[4813]: I1125 10:33:26.700298 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:26 crc kubenswrapper[4813]: I1125 10:33:26.700309 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:26Z","lastTransitionTime":"2025-11-25T10:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:26 crc kubenswrapper[4813]: I1125 10:33:26.802453 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:26 crc kubenswrapper[4813]: I1125 10:33:26.802484 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:26 crc kubenswrapper[4813]: I1125 10:33:26.802492 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:26 crc kubenswrapper[4813]: I1125 10:33:26.802512 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:26 crc kubenswrapper[4813]: I1125 10:33:26.802521 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:26Z","lastTransitionTime":"2025-11-25T10:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:26 crc kubenswrapper[4813]: I1125 10:33:26.905923 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:26 crc kubenswrapper[4813]: I1125 10:33:26.905953 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:26 crc kubenswrapper[4813]: I1125 10:33:26.905963 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:26 crc kubenswrapper[4813]: I1125 10:33:26.905977 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:26 crc kubenswrapper[4813]: I1125 10:33:26.905986 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:26Z","lastTransitionTime":"2025-11-25T10:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:27 crc kubenswrapper[4813]: I1125 10:33:27.009671 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:27 crc kubenswrapper[4813]: I1125 10:33:27.009726 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:27 crc kubenswrapper[4813]: I1125 10:33:27.009737 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:27 crc kubenswrapper[4813]: I1125 10:33:27.009753 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:27 crc kubenswrapper[4813]: I1125 10:33:27.009765 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:27Z","lastTransitionTime":"2025-11-25T10:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:27 crc kubenswrapper[4813]: I1125 10:33:27.111784 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:27 crc kubenswrapper[4813]: I1125 10:33:27.111818 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:27 crc kubenswrapper[4813]: I1125 10:33:27.111827 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:27 crc kubenswrapper[4813]: I1125 10:33:27.111843 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:27 crc kubenswrapper[4813]: I1125 10:33:27.111852 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:27Z","lastTransitionTime":"2025-11-25T10:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:27 crc kubenswrapper[4813]: I1125 10:33:27.215247 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:27 crc kubenswrapper[4813]: I1125 10:33:27.215279 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:27 crc kubenswrapper[4813]: I1125 10:33:27.215287 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:27 crc kubenswrapper[4813]: I1125 10:33:27.215301 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:27 crc kubenswrapper[4813]: I1125 10:33:27.215311 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:27Z","lastTransitionTime":"2025-11-25T10:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:27 crc kubenswrapper[4813]: I1125 10:33:27.319833 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:27 crc kubenswrapper[4813]: I1125 10:33:27.319939 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:27 crc kubenswrapper[4813]: I1125 10:33:27.319963 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:27 crc kubenswrapper[4813]: I1125 10:33:27.319993 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:27 crc kubenswrapper[4813]: I1125 10:33:27.320017 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:27Z","lastTransitionTime":"2025-11-25T10:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:27 crc kubenswrapper[4813]: I1125 10:33:27.422783 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:27 crc kubenswrapper[4813]: I1125 10:33:27.422833 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:27 crc kubenswrapper[4813]: I1125 10:33:27.422850 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:27 crc kubenswrapper[4813]: I1125 10:33:27.422878 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:27 crc kubenswrapper[4813]: I1125 10:33:27.422896 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:27Z","lastTransitionTime":"2025-11-25T10:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:27 crc kubenswrapper[4813]: I1125 10:33:27.525750 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:27 crc kubenswrapper[4813]: I1125 10:33:27.525821 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:27 crc kubenswrapper[4813]: I1125 10:33:27.525835 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:27 crc kubenswrapper[4813]: I1125 10:33:27.525851 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:27 crc kubenswrapper[4813]: I1125 10:33:27.525862 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:27Z","lastTransitionTime":"2025-11-25T10:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:27 crc kubenswrapper[4813]: I1125 10:33:27.621324 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 10:33:27 crc kubenswrapper[4813]: E1125 10:33:27.621454 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 10:33:27 crc kubenswrapper[4813]: I1125 10:33:27.621671 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w28xl" Nov 25 10:33:27 crc kubenswrapper[4813]: E1125 10:33:27.621786 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-w28xl" podUID="74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2" Nov 25 10:33:27 crc kubenswrapper[4813]: I1125 10:33:27.628207 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:27 crc kubenswrapper[4813]: I1125 10:33:27.628279 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:27 crc kubenswrapper[4813]: I1125 10:33:27.628298 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:27 crc kubenswrapper[4813]: I1125 10:33:27.628324 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:27 crc kubenswrapper[4813]: I1125 10:33:27.628349 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:27Z","lastTransitionTime":"2025-11-25T10:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:27 crc kubenswrapper[4813]: I1125 10:33:27.731290 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:27 crc kubenswrapper[4813]: I1125 10:33:27.731331 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:27 crc kubenswrapper[4813]: I1125 10:33:27.731340 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:27 crc kubenswrapper[4813]: I1125 10:33:27.731356 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:27 crc kubenswrapper[4813]: I1125 10:33:27.731365 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:27Z","lastTransitionTime":"2025-11-25T10:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:27 crc kubenswrapper[4813]: I1125 10:33:27.834078 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:27 crc kubenswrapper[4813]: I1125 10:33:27.834384 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:27 crc kubenswrapper[4813]: I1125 10:33:27.834487 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:27 crc kubenswrapper[4813]: I1125 10:33:27.834603 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:27 crc kubenswrapper[4813]: I1125 10:33:27.834787 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:27Z","lastTransitionTime":"2025-11-25T10:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:27 crc kubenswrapper[4813]: I1125 10:33:27.937399 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:27 crc kubenswrapper[4813]: I1125 10:33:27.937437 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:27 crc kubenswrapper[4813]: I1125 10:33:27.937446 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:27 crc kubenswrapper[4813]: I1125 10:33:27.937460 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:27 crc kubenswrapper[4813]: I1125 10:33:27.937470 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:27Z","lastTransitionTime":"2025-11-25T10:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:28 crc kubenswrapper[4813]: I1125 10:33:28.040094 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:28 crc kubenswrapper[4813]: I1125 10:33:28.040158 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:28 crc kubenswrapper[4813]: I1125 10:33:28.040172 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:28 crc kubenswrapper[4813]: I1125 10:33:28.040189 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:28 crc kubenswrapper[4813]: I1125 10:33:28.040199 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:28Z","lastTransitionTime":"2025-11-25T10:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:28 crc kubenswrapper[4813]: I1125 10:33:28.142722 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:28 crc kubenswrapper[4813]: I1125 10:33:28.142767 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:28 crc kubenswrapper[4813]: I1125 10:33:28.142780 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:28 crc kubenswrapper[4813]: I1125 10:33:28.142795 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:28 crc kubenswrapper[4813]: I1125 10:33:28.142806 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:28Z","lastTransitionTime":"2025-11-25T10:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:28 crc kubenswrapper[4813]: I1125 10:33:28.245446 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:28 crc kubenswrapper[4813]: I1125 10:33:28.245480 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:28 crc kubenswrapper[4813]: I1125 10:33:28.245490 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:28 crc kubenswrapper[4813]: I1125 10:33:28.245502 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:28 crc kubenswrapper[4813]: I1125 10:33:28.245511 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:28Z","lastTransitionTime":"2025-11-25T10:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:28 crc kubenswrapper[4813]: I1125 10:33:28.348268 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:28 crc kubenswrapper[4813]: I1125 10:33:28.348301 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:28 crc kubenswrapper[4813]: I1125 10:33:28.348312 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:28 crc kubenswrapper[4813]: I1125 10:33:28.348327 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:28 crc kubenswrapper[4813]: I1125 10:33:28.348338 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:28Z","lastTransitionTime":"2025-11-25T10:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:28 crc kubenswrapper[4813]: I1125 10:33:28.450259 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:28 crc kubenswrapper[4813]: I1125 10:33:28.450550 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:28 crc kubenswrapper[4813]: I1125 10:33:28.450636 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:28 crc kubenswrapper[4813]: I1125 10:33:28.450749 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:28 crc kubenswrapper[4813]: I1125 10:33:28.450843 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:28Z","lastTransitionTime":"2025-11-25T10:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:28 crc kubenswrapper[4813]: I1125 10:33:28.552925 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:28 crc kubenswrapper[4813]: I1125 10:33:28.552979 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:28 crc kubenswrapper[4813]: I1125 10:33:28.552989 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:28 crc kubenswrapper[4813]: I1125 10:33:28.553008 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:28 crc kubenswrapper[4813]: I1125 10:33:28.553019 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:28Z","lastTransitionTime":"2025-11-25T10:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:28 crc kubenswrapper[4813]: I1125 10:33:28.621239 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 10:33:28 crc kubenswrapper[4813]: I1125 10:33:28.621677 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 10:33:28 crc kubenswrapper[4813]: E1125 10:33:28.621825 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 10:33:28 crc kubenswrapper[4813]: E1125 10:33:28.622125 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 10:33:28 crc kubenswrapper[4813]: I1125 10:33:28.622451 4813 scope.go:117] "RemoveContainer" containerID="c47a786668d4e29437970008a1e91d74d92c964ba10a6eba1f8d405d05a26e7b" Nov 25 10:33:28 crc kubenswrapper[4813]: E1125 10:33:28.622572 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-8s5k7_openshift-ovn-kubernetes(8460ec76-ba89-4f8f-9055-d7274ab52d11)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" podUID="8460ec76-ba89-4f8f-9055-d7274ab52d11" Nov 25 10:33:28 crc kubenswrapper[4813]: I1125 10:33:28.655945 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:28 crc kubenswrapper[4813]: I1125 10:33:28.655978 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:28 crc kubenswrapper[4813]: I1125 10:33:28.655986 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:28 crc kubenswrapper[4813]: I1125 10:33:28.656000 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:28 crc kubenswrapper[4813]: I1125 10:33:28.656009 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:28Z","lastTransitionTime":"2025-11-25T10:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:28 crc kubenswrapper[4813]: I1125 10:33:28.758227 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:28 crc kubenswrapper[4813]: I1125 10:33:28.758269 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:28 crc kubenswrapper[4813]: I1125 10:33:28.758278 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:28 crc kubenswrapper[4813]: I1125 10:33:28.758290 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:28 crc kubenswrapper[4813]: I1125 10:33:28.758299 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:28Z","lastTransitionTime":"2025-11-25T10:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:28 crc kubenswrapper[4813]: I1125 10:33:28.860849 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:28 crc kubenswrapper[4813]: I1125 10:33:28.860895 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:28 crc kubenswrapper[4813]: I1125 10:33:28.860906 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:28 crc kubenswrapper[4813]: I1125 10:33:28.860921 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:28 crc kubenswrapper[4813]: I1125 10:33:28.860934 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:28Z","lastTransitionTime":"2025-11-25T10:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:28 crc kubenswrapper[4813]: I1125 10:33:28.963486 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:28 crc kubenswrapper[4813]: I1125 10:33:28.963521 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:28 crc kubenswrapper[4813]: I1125 10:33:28.963533 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:28 crc kubenswrapper[4813]: I1125 10:33:28.963547 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:28 crc kubenswrapper[4813]: I1125 10:33:28.963558 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:28Z","lastTransitionTime":"2025-11-25T10:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:29 crc kubenswrapper[4813]: I1125 10:33:29.065003 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:29 crc kubenswrapper[4813]: I1125 10:33:29.065114 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:29 crc kubenswrapper[4813]: I1125 10:33:29.065125 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:29 crc kubenswrapper[4813]: I1125 10:33:29.065141 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:29 crc kubenswrapper[4813]: I1125 10:33:29.065155 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:29Z","lastTransitionTime":"2025-11-25T10:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:29 crc kubenswrapper[4813]: I1125 10:33:29.167296 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:29 crc kubenswrapper[4813]: I1125 10:33:29.167342 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:29 crc kubenswrapper[4813]: I1125 10:33:29.167355 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:29 crc kubenswrapper[4813]: I1125 10:33:29.167370 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:29 crc kubenswrapper[4813]: I1125 10:33:29.167379 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:29Z","lastTransitionTime":"2025-11-25T10:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:29 crc kubenswrapper[4813]: I1125 10:33:29.269315 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:29 crc kubenswrapper[4813]: I1125 10:33:29.269352 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:29 crc kubenswrapper[4813]: I1125 10:33:29.269363 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:29 crc kubenswrapper[4813]: I1125 10:33:29.269379 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:29 crc kubenswrapper[4813]: I1125 10:33:29.269391 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:29Z","lastTransitionTime":"2025-11-25T10:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:29 crc kubenswrapper[4813]: I1125 10:33:29.371918 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:29 crc kubenswrapper[4813]: I1125 10:33:29.371960 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:29 crc kubenswrapper[4813]: I1125 10:33:29.371969 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:29 crc kubenswrapper[4813]: I1125 10:33:29.371983 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:29 crc kubenswrapper[4813]: I1125 10:33:29.371994 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:29Z","lastTransitionTime":"2025-11-25T10:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:29 crc kubenswrapper[4813]: I1125 10:33:29.474592 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:29 crc kubenswrapper[4813]: I1125 10:33:29.474630 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:29 crc kubenswrapper[4813]: I1125 10:33:29.474639 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:29 crc kubenswrapper[4813]: I1125 10:33:29.474651 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:29 crc kubenswrapper[4813]: I1125 10:33:29.474661 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:29Z","lastTransitionTime":"2025-11-25T10:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:29 crc kubenswrapper[4813]: I1125 10:33:29.577110 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:29 crc kubenswrapper[4813]: I1125 10:33:29.577140 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:29 crc kubenswrapper[4813]: I1125 10:33:29.577148 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:29 crc kubenswrapper[4813]: I1125 10:33:29.577177 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:29 crc kubenswrapper[4813]: I1125 10:33:29.577186 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:29Z","lastTransitionTime":"2025-11-25T10:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:29 crc kubenswrapper[4813]: I1125 10:33:29.621106 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 10:33:29 crc kubenswrapper[4813]: I1125 10:33:29.621129 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w28xl" Nov 25 10:33:29 crc kubenswrapper[4813]: E1125 10:33:29.621214 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 10:33:29 crc kubenswrapper[4813]: E1125 10:33:29.621285 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w28xl" podUID="74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2" Nov 25 10:33:29 crc kubenswrapper[4813]: I1125 10:33:29.679559 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:29 crc kubenswrapper[4813]: I1125 10:33:29.679960 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:29 crc kubenswrapper[4813]: I1125 10:33:29.680089 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:29 crc kubenswrapper[4813]: I1125 10:33:29.680225 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:29 crc kubenswrapper[4813]: I1125 10:33:29.680339 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:29Z","lastTransitionTime":"2025-11-25T10:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:29 crc kubenswrapper[4813]: I1125 10:33:29.782865 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:29 crc kubenswrapper[4813]: I1125 10:33:29.782903 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:29 crc kubenswrapper[4813]: I1125 10:33:29.782913 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:29 crc kubenswrapper[4813]: I1125 10:33:29.782927 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:29 crc kubenswrapper[4813]: I1125 10:33:29.782937 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:29Z","lastTransitionTime":"2025-11-25T10:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:29 crc kubenswrapper[4813]: I1125 10:33:29.885254 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:29 crc kubenswrapper[4813]: I1125 10:33:29.885318 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:29 crc kubenswrapper[4813]: I1125 10:33:29.885331 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:29 crc kubenswrapper[4813]: I1125 10:33:29.885349 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:29 crc kubenswrapper[4813]: I1125 10:33:29.885361 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:29Z","lastTransitionTime":"2025-11-25T10:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:29 crc kubenswrapper[4813]: I1125 10:33:29.987966 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:29 crc kubenswrapper[4813]: I1125 10:33:29.988012 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:29 crc kubenswrapper[4813]: I1125 10:33:29.988026 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:29 crc kubenswrapper[4813]: I1125 10:33:29.988046 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:29 crc kubenswrapper[4813]: I1125 10:33:29.988061 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:29Z","lastTransitionTime":"2025-11-25T10:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:30 crc kubenswrapper[4813]: I1125 10:33:30.089815 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:30 crc kubenswrapper[4813]: I1125 10:33:30.089853 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:30 crc kubenswrapper[4813]: I1125 10:33:30.089864 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:30 crc kubenswrapper[4813]: I1125 10:33:30.089877 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:30 crc kubenswrapper[4813]: I1125 10:33:30.089888 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:30Z","lastTransitionTime":"2025-11-25T10:33:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:30 crc kubenswrapper[4813]: I1125 10:33:30.192589 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:30 crc kubenswrapper[4813]: I1125 10:33:30.192631 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:30 crc kubenswrapper[4813]: I1125 10:33:30.192644 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:30 crc kubenswrapper[4813]: I1125 10:33:30.192666 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:30 crc kubenswrapper[4813]: I1125 10:33:30.192702 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:30Z","lastTransitionTime":"2025-11-25T10:33:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:30 crc kubenswrapper[4813]: I1125 10:33:30.294594 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:30 crc kubenswrapper[4813]: I1125 10:33:30.294637 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:30 crc kubenswrapper[4813]: I1125 10:33:30.294648 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:30 crc kubenswrapper[4813]: I1125 10:33:30.294667 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:30 crc kubenswrapper[4813]: I1125 10:33:30.294706 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:30Z","lastTransitionTime":"2025-11-25T10:33:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:30 crc kubenswrapper[4813]: I1125 10:33:30.396752 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:30 crc kubenswrapper[4813]: I1125 10:33:30.396795 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:30 crc kubenswrapper[4813]: I1125 10:33:30.396814 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:30 crc kubenswrapper[4813]: I1125 10:33:30.396833 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:30 crc kubenswrapper[4813]: I1125 10:33:30.396843 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:30Z","lastTransitionTime":"2025-11-25T10:33:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:30 crc kubenswrapper[4813]: I1125 10:33:30.498788 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:30 crc kubenswrapper[4813]: I1125 10:33:30.498828 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:30 crc kubenswrapper[4813]: I1125 10:33:30.498838 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:30 crc kubenswrapper[4813]: I1125 10:33:30.498855 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:30 crc kubenswrapper[4813]: I1125 10:33:30.498866 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:30Z","lastTransitionTime":"2025-11-25T10:33:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:30 crc kubenswrapper[4813]: I1125 10:33:30.601349 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:30 crc kubenswrapper[4813]: I1125 10:33:30.601387 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:30 crc kubenswrapper[4813]: I1125 10:33:30.601400 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:30 crc kubenswrapper[4813]: I1125 10:33:30.601418 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:30 crc kubenswrapper[4813]: I1125 10:33:30.601431 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:30Z","lastTransitionTime":"2025-11-25T10:33:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:30 crc kubenswrapper[4813]: I1125 10:33:30.621435 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 10:33:30 crc kubenswrapper[4813]: E1125 10:33:30.621536 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 10:33:30 crc kubenswrapper[4813]: I1125 10:33:30.621449 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 10:33:30 crc kubenswrapper[4813]: E1125 10:33:30.621736 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 10:33:30 crc kubenswrapper[4813]: I1125 10:33:30.703160 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:30 crc kubenswrapper[4813]: I1125 10:33:30.703206 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:30 crc kubenswrapper[4813]: I1125 10:33:30.703215 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:30 crc kubenswrapper[4813]: I1125 10:33:30.703227 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:30 crc kubenswrapper[4813]: I1125 10:33:30.703236 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:30Z","lastTransitionTime":"2025-11-25T10:33:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:30 crc kubenswrapper[4813]: I1125 10:33:30.805122 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:30 crc kubenswrapper[4813]: I1125 10:33:30.805151 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:30 crc kubenswrapper[4813]: I1125 10:33:30.805159 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:30 crc kubenswrapper[4813]: I1125 10:33:30.805190 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:30 crc kubenswrapper[4813]: I1125 10:33:30.805200 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:30Z","lastTransitionTime":"2025-11-25T10:33:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:30 crc kubenswrapper[4813]: I1125 10:33:30.907226 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:30 crc kubenswrapper[4813]: I1125 10:33:30.907355 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:30 crc kubenswrapper[4813]: I1125 10:33:30.907370 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:30 crc kubenswrapper[4813]: I1125 10:33:30.907384 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:30 crc kubenswrapper[4813]: I1125 10:33:30.907394 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:30Z","lastTransitionTime":"2025-11-25T10:33:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:31 crc kubenswrapper[4813]: I1125 10:33:31.009809 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:31 crc kubenswrapper[4813]: I1125 10:33:31.009870 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:31 crc kubenswrapper[4813]: I1125 10:33:31.009879 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:31 crc kubenswrapper[4813]: I1125 10:33:31.009893 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:31 crc kubenswrapper[4813]: I1125 10:33:31.009903 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:31Z","lastTransitionTime":"2025-11-25T10:33:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:31 crc kubenswrapper[4813]: I1125 10:33:31.112387 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:31 crc kubenswrapper[4813]: I1125 10:33:31.112429 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:31 crc kubenswrapper[4813]: I1125 10:33:31.112439 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:31 crc kubenswrapper[4813]: I1125 10:33:31.112457 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:31 crc kubenswrapper[4813]: I1125 10:33:31.112469 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:31Z","lastTransitionTime":"2025-11-25T10:33:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:31 crc kubenswrapper[4813]: I1125 10:33:31.214924 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:31 crc kubenswrapper[4813]: I1125 10:33:31.214960 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:31 crc kubenswrapper[4813]: I1125 10:33:31.214969 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:31 crc kubenswrapper[4813]: I1125 10:33:31.214984 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:31 crc kubenswrapper[4813]: I1125 10:33:31.214994 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:31Z","lastTransitionTime":"2025-11-25T10:33:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:31 crc kubenswrapper[4813]: I1125 10:33:31.317992 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:31 crc kubenswrapper[4813]: I1125 10:33:31.318037 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:31 crc kubenswrapper[4813]: I1125 10:33:31.318050 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:31 crc kubenswrapper[4813]: I1125 10:33:31.318068 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:31 crc kubenswrapper[4813]: I1125 10:33:31.318079 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:31Z","lastTransitionTime":"2025-11-25T10:33:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:31 crc kubenswrapper[4813]: I1125 10:33:31.420313 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:31 crc kubenswrapper[4813]: I1125 10:33:31.420362 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:31 crc kubenswrapper[4813]: I1125 10:33:31.420373 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:31 crc kubenswrapper[4813]: I1125 10:33:31.420391 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:31 crc kubenswrapper[4813]: I1125 10:33:31.420403 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:31Z","lastTransitionTime":"2025-11-25T10:33:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:31 crc kubenswrapper[4813]: I1125 10:33:31.522560 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:31 crc kubenswrapper[4813]: I1125 10:33:31.522590 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:31 crc kubenswrapper[4813]: I1125 10:33:31.522598 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:31 crc kubenswrapper[4813]: I1125 10:33:31.522612 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:31 crc kubenswrapper[4813]: I1125 10:33:31.522621 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:31Z","lastTransitionTime":"2025-11-25T10:33:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:31 crc kubenswrapper[4813]: I1125 10:33:31.621415 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 10:33:31 crc kubenswrapper[4813]: I1125 10:33:31.621526 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w28xl" Nov 25 10:33:31 crc kubenswrapper[4813]: E1125 10:33:31.621650 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 10:33:31 crc kubenswrapper[4813]: E1125 10:33:31.621793 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-w28xl" podUID="74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2" Nov 25 10:33:31 crc kubenswrapper[4813]: I1125 10:33:31.624745 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:31 crc kubenswrapper[4813]: I1125 10:33:31.624785 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:31 crc kubenswrapper[4813]: I1125 10:33:31.624796 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:31 crc kubenswrapper[4813]: I1125 10:33:31.624811 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:31 crc kubenswrapper[4813]: I1125 10:33:31.624822 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:31Z","lastTransitionTime":"2025-11-25T10:33:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:31 crc kubenswrapper[4813]: I1125 10:33:31.727501 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:31 crc kubenswrapper[4813]: I1125 10:33:31.727548 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:31 crc kubenswrapper[4813]: I1125 10:33:31.727560 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:31 crc kubenswrapper[4813]: I1125 10:33:31.727575 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:31 crc kubenswrapper[4813]: I1125 10:33:31.727587 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:31Z","lastTransitionTime":"2025-11-25T10:33:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:31 crc kubenswrapper[4813]: I1125 10:33:31.829865 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:31 crc kubenswrapper[4813]: I1125 10:33:31.829908 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:31 crc kubenswrapper[4813]: I1125 10:33:31.829920 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:31 crc kubenswrapper[4813]: I1125 10:33:31.829934 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:31 crc kubenswrapper[4813]: I1125 10:33:31.829946 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:31Z","lastTransitionTime":"2025-11-25T10:33:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:31 crc kubenswrapper[4813]: I1125 10:33:31.931670 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:31 crc kubenswrapper[4813]: I1125 10:33:31.931724 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:31 crc kubenswrapper[4813]: I1125 10:33:31.931733 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:31 crc kubenswrapper[4813]: I1125 10:33:31.931744 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:31 crc kubenswrapper[4813]: I1125 10:33:31.931753 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:31Z","lastTransitionTime":"2025-11-25T10:33:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:32 crc kubenswrapper[4813]: I1125 10:33:32.033285 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:32 crc kubenswrapper[4813]: I1125 10:33:32.033324 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:32 crc kubenswrapper[4813]: I1125 10:33:32.033333 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:32 crc kubenswrapper[4813]: I1125 10:33:32.033348 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:32 crc kubenswrapper[4813]: I1125 10:33:32.033359 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:32Z","lastTransitionTime":"2025-11-25T10:33:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 10:33:32 crc kubenswrapper[4813]: I1125 10:33:32.113437 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 10:33:32 crc kubenswrapper[4813]: I1125 10:33:32.113492 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 10:33:32 crc kubenswrapper[4813]: I1125 10:33:32.113505 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 10:33:32 crc kubenswrapper[4813]: I1125 10:33:32.113542 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 10:33:32 crc kubenswrapper[4813]: I1125 10:33:32.113552 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T10:33:32Z","lastTransitionTime":"2025-11-25T10:33:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 10:33:32 crc kubenswrapper[4813]: I1125 10:33:32.157851 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-srvdp"] Nov 25 10:33:32 crc kubenswrapper[4813]: I1125 10:33:32.158364 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-srvdp" Nov 25 10:33:32 crc kubenswrapper[4813]: I1125 10:33:32.161308 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Nov 25 10:33:32 crc kubenswrapper[4813]: I1125 10:33:32.162368 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Nov 25 10:33:32 crc kubenswrapper[4813]: I1125 10:33:32.162751 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Nov 25 10:33:32 crc kubenswrapper[4813]: I1125 10:33:32.165088 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Nov 25 10:33:32 crc kubenswrapper[4813]: I1125 10:33:32.185805 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=6.185788631 podStartE2EDuration="6.185788631s" podCreationTimestamp="2025-11-25 10:33:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 10:33:32.185624187 +0000 UTC m=+109.315334113" watchObservedRunningTime="2025-11-25 10:33:32.185788631 +0000 UTC m=+109.315498517" Nov 25 10:33:32 crc kubenswrapper[4813]: I1125 10:33:32.214160 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=89.214144478 podStartE2EDuration="1m29.214144478s" podCreationTimestamp="2025-11-25 10:32:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 10:33:32.213884851 +0000 UTC m=+109.343594757" watchObservedRunningTime="2025-11-25 10:33:32.214144478 +0000 UTC m=+109.343854364" Nov 25 10:33:32 crc kubenswrapper[4813]: I1125 
10:33:32.232758 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-mmh87" podStartSLOduration=89.232740817 podStartE2EDuration="1m29.232740817s" podCreationTimestamp="2025-11-25 10:32:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 10:33:32.223217117 +0000 UTC m=+109.352927003" watchObservedRunningTime="2025-11-25 10:33:32.232740817 +0000 UTC m=+109.362450703" Nov 25 10:33:32 crc kubenswrapper[4813]: I1125 10:33:32.233126 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" podStartSLOduration=88.233120628 podStartE2EDuration="1m28.233120628s" podCreationTimestamp="2025-11-25 10:32:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 10:33:32.232928233 +0000 UTC m=+109.362638119" watchObservedRunningTime="2025-11-25 10:33:32.233120628 +0000 UTC m=+109.362830534" Nov 25 10:33:32 crc kubenswrapper[4813]: I1125 10:33:32.240002 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/215626f5-059a-426d-bbe5-4708acdcd678-service-ca\") pod \"cluster-version-operator-5c965bbfc6-srvdp\" (UID: \"215626f5-059a-426d-bbe5-4708acdcd678\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-srvdp" Nov 25 10:33:32 crc kubenswrapper[4813]: I1125 10:33:32.240034 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/215626f5-059a-426d-bbe5-4708acdcd678-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-srvdp\" (UID: \"215626f5-059a-426d-bbe5-4708acdcd678\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-srvdp" Nov 25 10:33:32 crc kubenswrapper[4813]: I1125 10:33:32.240080 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/215626f5-059a-426d-bbe5-4708acdcd678-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-srvdp\" (UID: \"215626f5-059a-426d-bbe5-4708acdcd678\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-srvdp" Nov 25 10:33:32 crc kubenswrapper[4813]: I1125 10:33:32.240101 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/215626f5-059a-426d-bbe5-4708acdcd678-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-srvdp\" (UID: \"215626f5-059a-426d-bbe5-4708acdcd678\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-srvdp" Nov 25 10:33:32 crc kubenswrapper[4813]: I1125 10:33:32.240230 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/215626f5-059a-426d-bbe5-4708acdcd678-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-srvdp\" (UID: \"215626f5-059a-426d-bbe5-4708acdcd678\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-srvdp" Nov 25 10:33:32 crc kubenswrapper[4813]: I1125 10:33:32.246469 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=58.246447003 podStartE2EDuration="58.246447003s" podCreationTimestamp="2025-11-25 10:32:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 10:33:32.245209149 +0000 UTC m=+109.374919045" watchObservedRunningTime="2025-11-25 10:33:32.246447003 +0000 UTC m=+109.376156899" Nov 25 10:33:32 crc kubenswrapper[4813]: I1125 10:33:32.280577 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-rlpbx" podStartSLOduration=88.280559537 podStartE2EDuration="1m28.280559537s" podCreationTimestamp="2025-11-25 10:32:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 10:33:32.280020752 +0000 UTC m=+109.409730638" watchObservedRunningTime="2025-11-25 10:33:32.280559537 +0000 UTC m=+109.410269423" Nov 25 10:33:32 crc kubenswrapper[4813]: I1125 10:33:32.289976 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-qltmc" podStartSLOduration=89.289961434 podStartE2EDuration="1m29.289961434s" podCreationTimestamp="2025-11-25 10:32:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 10:33:32.289098101 +0000 UTC m=+109.418807997" watchObservedRunningTime="2025-11-25 10:33:32.289961434 +0000 UTC m=+109.419671320" Nov 25 10:33:32 crc kubenswrapper[4813]: I1125 10:33:32.300588 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sbzfj" podStartSLOduration=88.300568685 podStartE2EDuration="1m28.300568685s" podCreationTimestamp="2025-11-25 10:32:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 10:33:32.30038561 +0000 UTC m=+109.430095506" watchObservedRunningTime="2025-11-25 10:33:32.300568685 +0000 UTC m=+109.430278561" Nov 25 10:33:32 crc kubenswrapper[4813]: I1125 10:33:32.324815 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=85.324799029 podStartE2EDuration="1m25.324799029s" podCreationTimestamp="2025-11-25 10:32:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 10:33:32.324367507 +0000 UTC m=+109.454077413" watchObservedRunningTime="2025-11-25 10:33:32.324799029 +0000 UTC m=+109.454508915" Nov 25 10:33:32 crc kubenswrapper[4813]: I1125 10:33:32.339321 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=20.339305256 podStartE2EDuration="20.339305256s" podCreationTimestamp="2025-11-25 10:33:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 10:33:32.339151312 +0000 UTC m=+109.468861208" watchObservedRunningTime="2025-11-25 10:33:32.339305256 +0000 UTC m=+109.469015142" Nov 25 10:33:32 crc kubenswrapper[4813]: I1125 10:33:32.340748 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/215626f5-059a-426d-bbe5-4708acdcd678-service-ca\") pod \"cluster-version-operator-5c965bbfc6-srvdp\" (UID: \"215626f5-059a-426d-bbe5-4708acdcd678\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-srvdp" Nov 25 10:33:32 crc kubenswrapper[4813]: I1125 10:33:32.340793 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/215626f5-059a-426d-bbe5-4708acdcd678-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-srvdp\" (UID: \"215626f5-059a-426d-bbe5-4708acdcd678\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-srvdp" Nov 25 10:33:32 crc kubenswrapper[4813]: I1125 10:33:32.340855 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/215626f5-059a-426d-bbe5-4708acdcd678-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-srvdp\" (UID: \"215626f5-059a-426d-bbe5-4708acdcd678\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-srvdp" Nov 25 10:33:32 crc kubenswrapper[4813]: I1125 10:33:32.340878 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/215626f5-059a-426d-bbe5-4708acdcd678-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-srvdp\" (UID: \"215626f5-059a-426d-bbe5-4708acdcd678\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-srvdp" Nov 25 10:33:32 crc kubenswrapper[4813]: I1125 10:33:32.340902 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/215626f5-059a-426d-bbe5-4708acdcd678-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-srvdp\" (UID: \"215626f5-059a-426d-bbe5-4708acdcd678\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-srvdp" Nov 25 10:33:32 crc kubenswrapper[4813]: I1125 10:33:32.340989 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/215626f5-059a-426d-bbe5-4708acdcd678-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-srvdp\" (UID: \"215626f5-059a-426d-bbe5-4708acdcd678\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-srvdp" Nov 25 10:33:32 crc kubenswrapper[4813]: I1125 10:33:32.341056 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/215626f5-059a-426d-bbe5-4708acdcd678-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-srvdp\" (UID: \"215626f5-059a-426d-bbe5-4708acdcd678\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-srvdp" Nov 25 10:33:32 crc kubenswrapper[4813]: I1125 10:33:32.341606 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/215626f5-059a-426d-bbe5-4708acdcd678-service-ca\") pod \"cluster-version-operator-5c965bbfc6-srvdp\" (UID: \"215626f5-059a-426d-bbe5-4708acdcd678\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-srvdp" Nov 25 10:33:32 crc kubenswrapper[4813]: I1125 10:33:32.347363 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/215626f5-059a-426d-bbe5-4708acdcd678-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-srvdp\" 
(UID: \"215626f5-059a-426d-bbe5-4708acdcd678\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-srvdp" Nov 25 10:33:32 crc kubenswrapper[4813]: I1125 10:33:32.367813 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/215626f5-059a-426d-bbe5-4708acdcd678-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-srvdp\" (UID: \"215626f5-059a-426d-bbe5-4708acdcd678\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-srvdp" Nov 25 10:33:32 crc kubenswrapper[4813]: I1125 10:33:32.455375 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-4s9w7" podStartSLOduration=88.455357436 podStartE2EDuration="1m28.455357436s" podCreationTimestamp="2025-11-25 10:32:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 10:33:32.455327185 +0000 UTC m=+109.585037091" watchObservedRunningTime="2025-11-25 10:33:32.455357436 +0000 UTC m=+109.585067332" Nov 25 10:33:32 crc kubenswrapper[4813]: I1125 10:33:32.476320 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-srvdp" Nov 25 10:33:32 crc kubenswrapper[4813]: I1125 10:33:32.620384 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 10:33:32 crc kubenswrapper[4813]: I1125 10:33:32.620398 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 10:33:32 crc kubenswrapper[4813]: E1125 10:33:32.620976 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 10:33:32 crc kubenswrapper[4813]: E1125 10:33:32.621156 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 10:33:33 crc kubenswrapper[4813]: I1125 10:33:33.123216 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-srvdp" event={"ID":"215626f5-059a-426d-bbe5-4708acdcd678","Type":"ContainerStarted","Data":"d8b96ae10f5304e8a4a8b9dc502ee370cae51c1bf660ebb1f16f4e2f5fcf1703"} Nov 25 10:33:33 crc kubenswrapper[4813]: I1125 10:33:33.123272 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-srvdp" event={"ID":"215626f5-059a-426d-bbe5-4708acdcd678","Type":"ContainerStarted","Data":"d93753907e4779e22282df269318e096453bd42a7957aee84594876d0510a902"} Nov 25 10:33:33 crc kubenswrapper[4813]: I1125 10:33:33.141838 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-srvdp" podStartSLOduration=89.141814992 podStartE2EDuration="1m29.141814992s" podCreationTimestamp="2025-11-25 10:32:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 10:33:33.13806245 +0000 UTC m=+110.267772356" watchObservedRunningTime="2025-11-25 10:33:33.141814992 +0000 UTC m=+110.271524888" Nov 25 10:33:33 crc kubenswrapper[4813]: I1125 10:33:33.620602 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 10:33:33 crc kubenswrapper[4813]: I1125 10:33:33.620707 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w28xl" Nov 25 10:33:33 crc kubenswrapper[4813]: E1125 10:33:33.621934 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 10:33:33 crc kubenswrapper[4813]: E1125 10:33:33.622172 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w28xl" podUID="74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2" Nov 25 10:33:34 crc kubenswrapper[4813]: I1125 10:33:34.620715 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 10:33:34 crc kubenswrapper[4813]: E1125 10:33:34.620822 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 10:33:34 crc kubenswrapper[4813]: I1125 10:33:34.620715 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 10:33:34 crc kubenswrapper[4813]: E1125 10:33:34.620929 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 10:33:35 crc kubenswrapper[4813]: I1125 10:33:35.620781 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 10:33:35 crc kubenswrapper[4813]: E1125 10:33:35.620906 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 10:33:35 crc kubenswrapper[4813]: I1125 10:33:35.620966 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w28xl" Nov 25 10:33:35 crc kubenswrapper[4813]: E1125 10:33:35.621151 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w28xl" podUID="74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2" Nov 25 10:33:36 crc kubenswrapper[4813]: I1125 10:33:36.621384 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 10:33:36 crc kubenswrapper[4813]: I1125 10:33:36.621394 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 10:33:36 crc kubenswrapper[4813]: E1125 10:33:36.621550 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 10:33:36 crc kubenswrapper[4813]: E1125 10:33:36.621723 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 10:33:37 crc kubenswrapper[4813]: I1125 10:33:37.621135 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 10:33:37 crc kubenswrapper[4813]: I1125 10:33:37.621177 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w28xl" Nov 25 10:33:37 crc kubenswrapper[4813]: E1125 10:33:37.621326 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 10:33:37 crc kubenswrapper[4813]: E1125 10:33:37.621539 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w28xl" podUID="74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2" Nov 25 10:33:38 crc kubenswrapper[4813]: I1125 10:33:38.620392 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 10:33:38 crc kubenswrapper[4813]: I1125 10:33:38.620438 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 10:33:38 crc kubenswrapper[4813]: E1125 10:33:38.620532 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 10:33:38 crc kubenswrapper[4813]: E1125 10:33:38.620664 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 10:33:39 crc kubenswrapper[4813]: I1125 10:33:39.621176 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w28xl" Nov 25 10:33:39 crc kubenswrapper[4813]: I1125 10:33:39.621176 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 10:33:39 crc kubenswrapper[4813]: E1125 10:33:39.621344 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-w28xl" podUID="74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2" Nov 25 10:33:39 crc kubenswrapper[4813]: E1125 10:33:39.621513 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 10:33:39 crc kubenswrapper[4813]: I1125 10:33:39.622396 4813 scope.go:117] "RemoveContainer" containerID="c47a786668d4e29437970008a1e91d74d92c964ba10a6eba1f8d405d05a26e7b" Nov 25 10:33:39 crc kubenswrapper[4813]: E1125 10:33:39.622652 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-8s5k7_openshift-ovn-kubernetes(8460ec76-ba89-4f8f-9055-d7274ab52d11)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" podUID="8460ec76-ba89-4f8f-9055-d7274ab52d11" Nov 25 10:33:40 crc kubenswrapper[4813]: I1125 10:33:40.621322 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 10:33:40 crc kubenswrapper[4813]: E1125 10:33:40.621437 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 10:33:40 crc kubenswrapper[4813]: I1125 10:33:40.621646 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 10:33:40 crc kubenswrapper[4813]: E1125 10:33:40.621736 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 10:33:41 crc kubenswrapper[4813]: I1125 10:33:41.621390 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 10:33:41 crc kubenswrapper[4813]: I1125 10:33:41.621446 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w28xl" Nov 25 10:33:41 crc kubenswrapper[4813]: E1125 10:33:41.621627 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 10:33:41 crc kubenswrapper[4813]: E1125 10:33:41.621792 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w28xl" podUID="74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2" Nov 25 10:33:42 crc kubenswrapper[4813]: I1125 10:33:42.152629 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rlpbx_98439068-3c89-4c1b-8bb8-8aa848ef0cd3/kube-multus/1.log" Nov 25 10:33:42 crc kubenswrapper[4813]: I1125 10:33:42.153530 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rlpbx_98439068-3c89-4c1b-8bb8-8aa848ef0cd3/kube-multus/0.log" Nov 25 10:33:42 crc kubenswrapper[4813]: I1125 10:33:42.153793 4813 generic.go:334] "Generic (PLEG): container finished" podID="98439068-3c89-4c1b-8bb8-8aa848ef0cd3" containerID="e45d1cfd847d1fbd71b9790ea8725a76ffc6117b372d227e921dad0143f7b30c" exitCode=1 Nov 25 10:33:42 crc kubenswrapper[4813]: I1125 10:33:42.153883 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rlpbx" event={"ID":"98439068-3c89-4c1b-8bb8-8aa848ef0cd3","Type":"ContainerDied","Data":"e45d1cfd847d1fbd71b9790ea8725a76ffc6117b372d227e921dad0143f7b30c"} Nov 25 10:33:42 crc kubenswrapper[4813]: I1125 10:33:42.154159 4813 scope.go:117] "RemoveContainer" containerID="73be3b0cabd20c94bd5c69211038398effe8adbb93eda17dbb136f17fa5ba62e" Nov 25 10:33:42 crc kubenswrapper[4813]: I1125 10:33:42.154754 4813 scope.go:117] "RemoveContainer" containerID="e45d1cfd847d1fbd71b9790ea8725a76ffc6117b372d227e921dad0143f7b30c" Nov 25 10:33:42 crc kubenswrapper[4813]: E1125 10:33:42.155140 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-rlpbx_openshift-multus(98439068-3c89-4c1b-8bb8-8aa848ef0cd3)\"" pod="openshift-multus/multus-rlpbx" podUID="98439068-3c89-4c1b-8bb8-8aa848ef0cd3" Nov 25 10:33:42 crc kubenswrapper[4813]: I1125 10:33:42.621273 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 10:33:42 crc kubenswrapper[4813]: I1125 10:33:42.621340 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 10:33:42 crc kubenswrapper[4813]: E1125 10:33:42.621432 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 10:33:42 crc kubenswrapper[4813]: E1125 10:33:42.621670 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 10:33:43 crc kubenswrapper[4813]: I1125 10:33:43.157667 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rlpbx_98439068-3c89-4c1b-8bb8-8aa848ef0cd3/kube-multus/1.log" Nov 25 10:33:43 crc kubenswrapper[4813]: I1125 10:33:43.620754 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w28xl" Nov 25 10:33:43 crc kubenswrapper[4813]: I1125 10:33:43.620824 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 10:33:43 crc kubenswrapper[4813]: E1125 10:33:43.622332 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w28xl" podUID="74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2" Nov 25 10:33:43 crc kubenswrapper[4813]: E1125 10:33:43.622609 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 10:33:43 crc kubenswrapper[4813]: E1125 10:33:43.640452 4813 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Nov 25 10:33:43 crc kubenswrapper[4813]: E1125 10:33:43.722389 4813 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 25 10:33:44 crc kubenswrapper[4813]: I1125 10:33:44.620958 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 10:33:44 crc kubenswrapper[4813]: E1125 10:33:44.621075 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 10:33:44 crc kubenswrapper[4813]: I1125 10:33:44.620958 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 10:33:44 crc kubenswrapper[4813]: E1125 10:33:44.621333 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 10:33:45 crc kubenswrapper[4813]: I1125 10:33:45.620464 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w28xl" Nov 25 10:33:45 crc kubenswrapper[4813]: I1125 10:33:45.620462 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 10:33:45 crc kubenswrapper[4813]: E1125 10:33:45.620581 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w28xl" podUID="74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2" Nov 25 10:33:45 crc kubenswrapper[4813]: E1125 10:33:45.620648 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 10:33:46 crc kubenswrapper[4813]: I1125 10:33:46.621013 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 10:33:46 crc kubenswrapper[4813]: I1125 10:33:46.621111 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 10:33:46 crc kubenswrapper[4813]: E1125 10:33:46.621355 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 10:33:46 crc kubenswrapper[4813]: E1125 10:33:46.621479 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 10:33:47 crc kubenswrapper[4813]: I1125 10:33:47.620720 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 10:33:47 crc kubenswrapper[4813]: I1125 10:33:47.620785 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w28xl" Nov 25 10:33:47 crc kubenswrapper[4813]: E1125 10:33:47.621067 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 10:33:47 crc kubenswrapper[4813]: E1125 10:33:47.621206 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w28xl" podUID="74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2" Nov 25 10:33:48 crc kubenswrapper[4813]: I1125 10:33:48.621397 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 10:33:48 crc kubenswrapper[4813]: I1125 10:33:48.621547 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 10:33:48 crc kubenswrapper[4813]: E1125 10:33:48.621627 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 10:33:48 crc kubenswrapper[4813]: E1125 10:33:48.621820 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 10:33:48 crc kubenswrapper[4813]: E1125 10:33:48.723985 4813 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 25 10:33:49 crc kubenswrapper[4813]: I1125 10:33:49.620648 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 10:33:49 crc kubenswrapper[4813]: I1125 10:33:49.620738 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-w28xl" Nov 25 10:33:49 crc kubenswrapper[4813]: E1125 10:33:49.620808 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 10:33:49 crc kubenswrapper[4813]: E1125 10:33:49.620890 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w28xl" podUID="74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2" Nov 25 10:33:50 crc kubenswrapper[4813]: I1125 10:33:50.620433 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 10:33:50 crc kubenswrapper[4813]: I1125 10:33:50.620515 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 10:33:50 crc kubenswrapper[4813]: E1125 10:33:50.620597 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 10:33:50 crc kubenswrapper[4813]: E1125 10:33:50.621051 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 10:33:50 crc kubenswrapper[4813]: I1125 10:33:50.621225 4813 scope.go:117] "RemoveContainer" containerID="c47a786668d4e29437970008a1e91d74d92c964ba10a6eba1f8d405d05a26e7b" Nov 25 10:33:51 crc kubenswrapper[4813]: I1125 10:33:51.184842 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8s5k7_8460ec76-ba89-4f8f-9055-d7274ab52d11/ovnkube-controller/3.log" Nov 25 10:33:51 crc kubenswrapper[4813]: I1125 10:33:51.188206 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" event={"ID":"8460ec76-ba89-4f8f-9055-d7274ab52d11","Type":"ContainerStarted","Data":"0e445d1b17b17b79ca73cab7e0b8c0fde1cee7996193a9b5e3155593909b4a3a"} Nov 25 10:33:51 crc kubenswrapper[4813]: I1125 10:33:51.188734 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" Nov 25 10:33:51 crc kubenswrapper[4813]: I1125 10:33:51.223805 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" podStartSLOduration=107.22376397 podStartE2EDuration="1m47.22376397s" podCreationTimestamp="2025-11-25 10:32:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 10:33:51.22340338 +0000 UTC m=+128.353113276" watchObservedRunningTime="2025-11-25 10:33:51.22376397 +0000 UTC m=+128.353473846" Nov 25 10:33:51 crc kubenswrapper[4813]: I1125 10:33:51.540299 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-w28xl"] Nov 25 10:33:51 crc kubenswrapper[4813]: I1125 10:33:51.540569 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w28xl" Nov 25 10:33:51 crc kubenswrapper[4813]: E1125 10:33:51.540745 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w28xl" podUID="74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2" Nov 25 10:33:51 crc kubenswrapper[4813]: I1125 10:33:51.620883 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 10:33:51 crc kubenswrapper[4813]: E1125 10:33:51.621160 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 10:33:52 crc kubenswrapper[4813]: I1125 10:33:52.620970 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 10:33:52 crc kubenswrapper[4813]: I1125 10:33:52.621042 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 10:33:52 crc kubenswrapper[4813]: E1125 10:33:52.621781 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 10:33:52 crc kubenswrapper[4813]: E1125 10:33:52.621812 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 10:33:53 crc kubenswrapper[4813]: I1125 10:33:53.621217 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 10:33:53 crc kubenswrapper[4813]: E1125 10:33:53.622465 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 10:33:53 crc kubenswrapper[4813]: I1125 10:33:53.622568 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w28xl" Nov 25 10:33:53 crc kubenswrapper[4813]: E1125 10:33:53.622753 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w28xl" podUID="74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2" Nov 25 10:33:53 crc kubenswrapper[4813]: E1125 10:33:53.725228 4813 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 25 10:33:54 crc kubenswrapper[4813]: I1125 10:33:54.621065 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 10:33:54 crc kubenswrapper[4813]: I1125 10:33:54.621131 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 10:33:54 crc kubenswrapper[4813]: E1125 10:33:54.621750 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 10:33:54 crc kubenswrapper[4813]: I1125 10:33:54.621864 4813 scope.go:117] "RemoveContainer" containerID="e45d1cfd847d1fbd71b9790ea8725a76ffc6117b372d227e921dad0143f7b30c" Nov 25 10:33:54 crc kubenswrapper[4813]: E1125 10:33:54.622152 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 10:33:55 crc kubenswrapper[4813]: I1125 10:33:55.206530 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rlpbx_98439068-3c89-4c1b-8bb8-8aa848ef0cd3/kube-multus/1.log" Nov 25 10:33:55 crc kubenswrapper[4813]: I1125 10:33:55.206599 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rlpbx" event={"ID":"98439068-3c89-4c1b-8bb8-8aa848ef0cd3","Type":"ContainerStarted","Data":"697fb46d168c6582c121e2351076bc5ac6817cf08da2f08b3927d576bbf35525"} Nov 25 10:33:55 crc kubenswrapper[4813]: I1125 10:33:55.621444 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 10:33:55 crc kubenswrapper[4813]: I1125 10:33:55.621511 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w28xl" Nov 25 10:33:55 crc kubenswrapper[4813]: E1125 10:33:55.621600 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 10:33:55 crc kubenswrapper[4813]: E1125 10:33:55.621747 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w28xl" podUID="74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2" Nov 25 10:33:56 crc kubenswrapper[4813]: I1125 10:33:56.620761 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 10:33:56 crc kubenswrapper[4813]: I1125 10:33:56.620794 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 10:33:56 crc kubenswrapper[4813]: E1125 10:33:56.620942 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 10:33:56 crc kubenswrapper[4813]: E1125 10:33:56.621077 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 10:33:57 crc kubenswrapper[4813]: I1125 10:33:57.621282 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w28xl" Nov 25 10:33:57 crc kubenswrapper[4813]: I1125 10:33:57.621282 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 10:33:57 crc kubenswrapper[4813]: E1125 10:33:57.621720 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w28xl" podUID="74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2" Nov 25 10:33:57 crc kubenswrapper[4813]: E1125 10:33:57.621847 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 10:33:58 crc kubenswrapper[4813]: I1125 10:33:58.620576 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 10:33:58 crc kubenswrapper[4813]: I1125 10:33:58.620628 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 10:33:58 crc kubenswrapper[4813]: E1125 10:33:58.620922 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 10:33:58 crc kubenswrapper[4813]: E1125 10:33:58.621109 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 10:33:59 crc kubenswrapper[4813]: I1125 10:33:59.621222 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 10:33:59 crc kubenswrapper[4813]: I1125 10:33:59.621502 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w28xl" Nov 25 10:33:59 crc kubenswrapper[4813]: I1125 10:33:59.623933 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Nov 25 10:33:59 crc kubenswrapper[4813]: I1125 10:33:59.623933 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Nov 25 10:33:59 crc kubenswrapper[4813]: I1125 10:33:59.623996 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Nov 25 10:33:59 crc kubenswrapper[4813]: I1125 10:33:59.624412 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Nov 25 10:34:00 crc kubenswrapper[4813]: I1125 10:34:00.621092 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 10:34:00 crc kubenswrapper[4813]: I1125 10:34:00.621134 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 10:34:00 crc kubenswrapper[4813]: I1125 10:34:00.623475 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Nov 25 10:34:00 crc kubenswrapper[4813]: I1125 10:34:00.626222 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.778383 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.823927 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-cg7wn"] Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.824557 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cg7wn" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.824642 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-5ngzq"] Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.825166 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-5ngzq" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.826838 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-ckjsl"] Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.827438 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ckjsl" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.827741 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.828420 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.829327 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.834104 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.835009 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.835171 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.835375 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.836234 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.836440 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.836612 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.836813 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.836957 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.837098 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.837236 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.837379 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.837518 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.839802 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.839976 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.840251 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 25 10:34:02 crc 
kubenswrapper[4813]: I1125 10:34:02.843623 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-87pc9"] Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.844046 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.844303 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.844474 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.844674 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.844730 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-87pc9" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.846145 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.846305 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-d8jnq"] Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.846789 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-d8jnq" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.847330 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.847357 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-456g2"] Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.848389 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-456g2" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.849214 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.855847 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-vd4gc"] Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.859374 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7b2wt"] Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.862743 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.870503 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-vd4gc" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.883404 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-n6d5q"] Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.883982 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-wfr92"] Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.884230 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.884342 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-48zrm"] Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.884748 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.885086 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.885235 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.885392 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.885581 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.885870 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.886032 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.886200 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.886356 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.887436 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-n6d5q" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.889245 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-wfr92" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.890814 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7b2wt" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.892529 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ld2mj"] Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.893113 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-48zrm" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.893826 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-482dq"] Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.894041 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ld2mj" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.895160 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-rpfp2"] Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.895540 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-rpfp2" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.896503 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-482dq" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.898304 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.898802 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.899428 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.899530 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.899579 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.899720 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.899860 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.899987 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.900118 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.900346 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.900556 4813 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"client-ca" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.904109 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.905623 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.905914 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.910028 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.910185 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.910303 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.910466 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.912933 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-frcz9"] Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.913887 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-frcz9" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.914806 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7hmqn"] Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.915270 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7hmqn" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.915509 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-cg7wn"] Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.916436 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-hkxnn"] Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.917083 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-hkxnn" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.919225 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-dsd6j"] Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.920065 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wctdv"] Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.920501 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.920864 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-dsd6j" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.921744 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-ckjsl"] Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.923043 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-d8jnq"] Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.928164 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-w7ltb"] Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.928631 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-5ngzq"] Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.928741 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-w7ltb" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.934870 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.937709 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.937871 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.938234 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.938528 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.938725 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.938866 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.939000 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-wsrtz"] Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.939882 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-wsrtz" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.940199 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.942779 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g9f6m"] Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.959653 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.960061 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.960978 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.962122 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-kht7r"] Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.966066 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.966537 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.968309 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.968406 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.969035 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.969773 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.970356 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.970596 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.972120 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.972128 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g9f6m" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.982898 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65730459-3e56-4cd2-97f4-4e47f60c32c6-config\") pod \"route-controller-manager-6576b87f9c-ckjsl\" (UID: \"65730459-3e56-4cd2-97f4-4e47f60c32c6\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ckjsl" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.982940 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/537ea37d-925a-4ba7-95de-307e69630afb-serving-cert\") pod \"authentication-operator-69f744f599-d8jnq\" (UID: \"537ea37d-925a-4ba7-95de-307e69630afb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-d8jnq" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.982971 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/609ff7ea-0071-4b93-af38-87f1d04aa886-trusted-ca\") pod \"console-operator-58897d9998-wfr92\" (UID: \"609ff7ea-0071-4b93-af38-87f1d04aa886\") " pod="openshift-console-operator/console-operator-58897d9998-wfr92" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.982994 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f94406f9-8434-44b5-b86c-15a9d11c4245-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-n6d5q\" (UID: \"f94406f9-8434-44b5-b86c-15a9d11c4245\") " pod="openshift-authentication/oauth-openshift-558db77b4-n6d5q" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.983016 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f94406f9-8434-44b5-b86c-15a9d11c4245-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-n6d5q\" (UID: \"f94406f9-8434-44b5-b86c-15a9d11c4245\") " pod="openshift-authentication/oauth-openshift-558db77b4-n6d5q" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.983036 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b543d0c3-b775-4c87-bbd0-016e86361945-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-cg7wn\" (UID: \"b543d0c3-b775-4c87-bbd0-016e86361945\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cg7wn" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.983052 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a70dbef-bca6-47b6-8814-424cc0cbf441-serving-cert\") pod \"apiserver-76f77b778f-5ngzq\" (UID: \"7a70dbef-bca6-47b6-8814-424cc0cbf441\") " pod="openshift-apiserver/apiserver-76f77b778f-5ngzq" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.983136 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b543d0c3-b775-4c87-bbd0-016e86361945-serving-cert\") pod \"apiserver-7bbb656c7d-cg7wn\" (UID: \"b543d0c3-b775-4c87-bbd0-016e86361945\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cg7wn" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.983175 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/609ff7ea-0071-4b93-af38-87f1d04aa886-config\") pod \"console-operator-58897d9998-wfr92\" (UID: \"609ff7ea-0071-4b93-af38-87f1d04aa886\") " pod="openshift-console-operator/console-operator-58897d9998-wfr92" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.983215 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7a70dbef-bca6-47b6-8814-424cc0cbf441-encryption-config\") pod \"apiserver-76f77b778f-5ngzq\" (UID: \"7a70dbef-bca6-47b6-8814-424cc0cbf441\") " pod="openshift-apiserver/apiserver-76f77b778f-5ngzq" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.983239 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/537ea37d-925a-4ba7-95de-307e69630afb-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-d8jnq\" (UID: \"537ea37d-925a-4ba7-95de-307e69630afb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-d8jnq" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.983260 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b543d0c3-b775-4c87-bbd0-016e86361945-encryption-config\") pod \"apiserver-7bbb656c7d-cg7wn\" (UID: \"b543d0c3-b775-4c87-bbd0-016e86361945\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cg7wn" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.983281 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd388810-9d8b-4057-942d-7249cf14d38f-config\") pod \"machine-approver-56656f9798-87pc9\" (UID: \"dd388810-9d8b-4057-942d-7249cf14d38f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-87pc9" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.983375 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqqsj\" (UniqueName: \"kubernetes.io/projected/dd388810-9d8b-4057-942d-7249cf14d38f-kube-api-access-nqqsj\") pod \"machine-approver-56656f9798-87pc9\" (UID: \"dd388810-9d8b-4057-942d-7249cf14d38f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-87pc9" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.983401 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40d84038-a98d-46f7-90b9-b65d9eb09937-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-456g2\" (UID: \"40d84038-a98d-46f7-90b9-b65d9eb09937\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-456g2" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.983418 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b543d0c3-b775-4c87-bbd0-016e86361945-audit-dir\") pod \"apiserver-7bbb656c7d-cg7wn\" (UID: \"b543d0c3-b775-4c87-bbd0-016e86361945\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cg7wn" Nov 25 10:34:02 crc 
kubenswrapper[4813]: I1125 10:34:02.983440 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/7a70dbef-bca6-47b6-8814-424cc0cbf441-audit\") pod \"apiserver-76f77b778f-5ngzq\" (UID: \"7a70dbef-bca6-47b6-8814-424cc0cbf441\") " pod="openshift-apiserver/apiserver-76f77b778f-5ngzq" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.983469 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqxfz\" (UniqueName: \"kubernetes.io/projected/65730459-3e56-4cd2-97f4-4e47f60c32c6-kube-api-access-nqxfz\") pod \"route-controller-manager-6576b87f9c-ckjsl\" (UID: \"65730459-3e56-4cd2-97f4-4e47f60c32c6\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ckjsl" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.983512 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f94406f9-8434-44b5-b86c-15a9d11c4245-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-n6d5q\" (UID: \"f94406f9-8434-44b5-b86c-15a9d11c4245\") " pod="openshift-authentication/oauth-openshift-558db77b4-n6d5q" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.983536 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f94406f9-8434-44b5-b86c-15a9d11c4245-audit-dir\") pod \"oauth-openshift-558db77b4-n6d5q\" (UID: \"f94406f9-8434-44b5-b86c-15a9d11c4245\") " pod="openshift-authentication/oauth-openshift-558db77b4-n6d5q" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.983560 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09baf5a6-68d3-4173-ba92-46e36fab8a2e-config\") pod \"controller-manager-879f6c89f-vd4gc\" (UID: \"09baf5a6-68d3-4173-ba92-46e36fab8a2e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vd4gc" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.983589 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/65730459-3e56-4cd2-97f4-4e47f60c32c6-client-ca\") pod \"route-controller-manager-6576b87f9c-ckjsl\" (UID: \"65730459-3e56-4cd2-97f4-4e47f60c32c6\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ckjsl" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.983623 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7a70dbef-bca6-47b6-8814-424cc0cbf441-etcd-serving-ca\") pod \"apiserver-76f77b778f-5ngzq\" (UID: \"7a70dbef-bca6-47b6-8814-424cc0cbf441\") " pod="openshift-apiserver/apiserver-76f77b778f-5ngzq" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.983651 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a70dbef-bca6-47b6-8814-424cc0cbf441-trusted-ca-bundle\") pod \"apiserver-76f77b778f-5ngzq\" (UID: \"7a70dbef-bca6-47b6-8814-424cc0cbf441\") " pod="openshift-apiserver/apiserver-76f77b778f-5ngzq" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.983698 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lt2w\" (UniqueName: \"kubernetes.io/projected/7a70dbef-bca6-47b6-8814-424cc0cbf441-kube-api-access-9lt2w\") pod \"apiserver-76f77b778f-5ngzq\" (UID: \"7a70dbef-bca6-47b6-8814-424cc0cbf441\") " pod="openshift-apiserver/apiserver-76f77b778f-5ngzq" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.983738 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/537ea37d-925a-4ba7-95de-307e69630afb-config\") pod \"authentication-operator-69f744f599-d8jnq\" (UID: \"537ea37d-925a-4ba7-95de-307e69630afb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-d8jnq" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.983755 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6l5qp\" (UniqueName: \"kubernetes.io/projected/537ea37d-925a-4ba7-95de-307e69630afb-kube-api-access-6l5qp\") pod \"authentication-operator-69f744f599-d8jnq\" (UID: \"537ea37d-925a-4ba7-95de-307e69630afb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-d8jnq" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.983775 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsznf\" (UniqueName: \"kubernetes.io/projected/09baf5a6-68d3-4173-ba92-46e36fab8a2e-kube-api-access-hsznf\") pod \"controller-manager-879f6c89f-vd4gc\" (UID: \"09baf5a6-68d3-4173-ba92-46e36fab8a2e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vd4gc" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.983795 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/7a70dbef-bca6-47b6-8814-424cc0cbf441-image-import-ca\") pod \"apiserver-76f77b778f-5ngzq\" (UID: \"7a70dbef-bca6-47b6-8814-424cc0cbf441\") " pod="openshift-apiserver/apiserver-76f77b778f-5ngzq" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.983815 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09baf5a6-68d3-4173-ba92-46e36fab8a2e-serving-cert\") pod \"controller-manager-879f6c89f-vd4gc\" (UID: \"09baf5a6-68d3-4173-ba92-46e36fab8a2e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vd4gc" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.983832 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49lwb\" (UniqueName: \"kubernetes.io/projected/609ff7ea-0071-4b93-af38-87f1d04aa886-kube-api-access-49lwb\") pod \"console-operator-58897d9998-wfr92\" (UID: \"609ff7ea-0071-4b93-af38-87f1d04aa886\") " pod="openshift-console-operator/console-operator-58897d9998-wfr92" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.983851 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f94406f9-8434-44b5-b86c-15a9d11c4245-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-n6d5q\" (UID: \"f94406f9-8434-44b5-b86c-15a9d11c4245\") " pod="openshift-authentication/oauth-openshift-558db77b4-n6d5q" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.983872 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f94406f9-8434-44b5-b86c-15a9d11c4245-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-n6d5q\" (UID: \"f94406f9-8434-44b5-b86c-15a9d11c4245\") " pod="openshift-authentication/oauth-openshift-558db77b4-n6d5q" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.983891 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f94406f9-8434-44b5-b86c-15a9d11c4245-audit-policies\") pod \"oauth-openshift-558db77b4-n6d5q\" (UID: \"f94406f9-8434-44b5-b86c-15a9d11c4245\") " pod="openshift-authentication/oauth-openshift-558db77b4-n6d5q" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.983912 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65730459-3e56-4cd2-97f4-4e47f60c32c6-serving-cert\") pod \"route-controller-manager-6576b87f9c-ckjsl\" (UID: \"65730459-3e56-4cd2-97f4-4e47f60c32c6\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ckjsl" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.983928 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7xvj\" (UniqueName: \"kubernetes.io/projected/8804f49f-9764-4368-ab35-dcf4dadfb223-kube-api-access-t7xvj\") pod \"cluster-samples-operator-665b6dd947-7b2wt\" (UID: \"8804f49f-9764-4368-ab35-dcf4dadfb223\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7b2wt" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.983970 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f94406f9-8434-44b5-b86c-15a9d11c4245-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-n6d5q\" (UID: \"f94406f9-8434-44b5-b86c-15a9d11c4245\") " pod="openshift-authentication/oauth-openshift-558db77b4-n6d5q" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.983988 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2t5q\" (UniqueName: \"kubernetes.io/projected/f94406f9-8434-44b5-b86c-15a9d11c4245-kube-api-access-s2t5q\") pod \"oauth-openshift-558db77b4-n6d5q\" (UID: \"f94406f9-8434-44b5-b86c-15a9d11c4245\") " pod="openshift-authentication/oauth-openshift-558db77b4-n6d5q" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.984006 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b543d0c3-b775-4c87-bbd0-016e86361945-etcd-client\") pod \"apiserver-7bbb656c7d-cg7wn\" (UID: \"b543d0c3-b775-4c87-bbd0-016e86361945\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cg7wn" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.984024 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7a70dbef-bca6-47b6-8814-424cc0cbf441-etcd-client\") pod \"apiserver-76f77b778f-5ngzq\" (UID: \"7a70dbef-bca6-47b6-8814-424cc0cbf441\") " pod="openshift-apiserver/apiserver-76f77b778f-5ngzq" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.984055 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/537ea37d-925a-4ba7-95de-307e69630afb-service-ca-bundle\") pod \"authentication-operator-69f744f599-d8jnq\" (UID: \"537ea37d-925a-4ba7-95de-307e69630afb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-d8jnq" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.984081 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/609ff7ea-0071-4b93-af38-87f1d04aa886-serving-cert\") pod \"console-operator-58897d9998-wfr92\" (UID: \"609ff7ea-0071-4b93-af38-87f1d04aa886\") " pod="openshift-console-operator/console-operator-58897d9998-wfr92" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.984102 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/09baf5a6-68d3-4173-ba92-46e36fab8a2e-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-vd4gc\" (UID: \"09baf5a6-68d3-4173-ba92-46e36fab8a2e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vd4gc" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.984125 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7a70dbef-bca6-47b6-8814-424cc0cbf441-audit-dir\") pod \"apiserver-76f77b778f-5ngzq\" (UID: \"7a70dbef-bca6-47b6-8814-424cc0cbf441\") " pod="openshift-apiserver/apiserver-76f77b778f-5ngzq" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.984162 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzvbw\" (UniqueName: \"kubernetes.io/projected/40d84038-a98d-46f7-90b9-b65d9eb09937-kube-api-access-dzvbw\") pod \"openshift-apiserver-operator-796bbdcf4f-456g2\" (UID: \"40d84038-a98d-46f7-90b9-b65d9eb09937\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-456g2" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.984186 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b543d0c3-b775-4c87-bbd0-016e86361945-audit-policies\") pod \"apiserver-7bbb656c7d-cg7wn\" (UID: \"b543d0c3-b775-4c87-bbd0-016e86361945\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cg7wn" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.984211 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7cdv\" (UniqueName: \"kubernetes.io/projected/b543d0c3-b775-4c87-bbd0-016e86361945-kube-api-access-x7cdv\") pod \"apiserver-7bbb656c7d-cg7wn\" (UID: \"b543d0c3-b775-4c87-bbd0-016e86361945\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cg7wn" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.984230 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/8804f49f-9764-4368-ab35-dcf4dadfb223-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-7b2wt\" (UID: \"8804f49f-9764-4368-ab35-dcf4dadfb223\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7b2wt" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.984252 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40d84038-a98d-46f7-90b9-b65d9eb09937-config\") pod \"openshift-apiserver-operator-796bbdcf4f-456g2\" (UID: \"40d84038-a98d-46f7-90b9-b65d9eb09937\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-456g2" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.984274 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f94406f9-8434-44b5-b86c-15a9d11c4245-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-n6d5q\" (UID: \"f94406f9-8434-44b5-b86c-15a9d11c4245\") " pod="openshift-authentication/oauth-openshift-558db77b4-n6d5q" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.984303 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f94406f9-8434-44b5-b86c-15a9d11c4245-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-n6d5q\" (UID: \"f94406f9-8434-44b5-b86c-15a9d11c4245\") " pod="openshift-authentication/oauth-openshift-558db77b4-n6d5q" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.984908 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7a70dbef-bca6-47b6-8814-424cc0cbf441-node-pullsecrets\") pod \"apiserver-76f77b778f-5ngzq\" (UID: \"7a70dbef-bca6-47b6-8814-424cc0cbf441\") " pod="openshift-apiserver/apiserver-76f77b778f-5ngzq" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.984944 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/09baf5a6-68d3-4173-ba92-46e36fab8a2e-client-ca\") pod \"controller-manager-879f6c89f-vd4gc\" (UID: \"09baf5a6-68d3-4173-ba92-46e36fab8a2e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vd4gc" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.984985 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/dd388810-9d8b-4057-942d-7249cf14d38f-machine-approver-tls\") pod \"machine-approver-56656f9798-87pc9\" (UID: \"dd388810-9d8b-4057-942d-7249cf14d38f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-87pc9" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.985015 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a70dbef-bca6-47b6-8814-424cc0cbf441-config\") pod \"apiserver-76f77b778f-5ngzq\" (UID: \"7a70dbef-bca6-47b6-8814-424cc0cbf441\") " pod="openshift-apiserver/apiserver-76f77b778f-5ngzq" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.985043 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f94406f9-8434-44b5-b86c-15a9d11c4245-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-n6d5q\" (UID: \"f94406f9-8434-44b5-b86c-15a9d11c4245\") " pod="openshift-authentication/oauth-openshift-558db77b4-n6d5q" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.985098 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dd388810-9d8b-4057-942d-7249cf14d38f-auth-proxy-config\") pod \"machine-approver-56656f9798-87pc9\" (UID: \"dd388810-9d8b-4057-942d-7249cf14d38f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-87pc9" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.985132 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f94406f9-8434-44b5-b86c-15a9d11c4245-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-n6d5q\" (UID: \"f94406f9-8434-44b5-b86c-15a9d11c4245\") " pod="openshift-authentication/oauth-openshift-558db77b4-n6d5q" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.985158 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f94406f9-8434-44b5-b86c-15a9d11c4245-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-n6d5q\" (UID: \"f94406f9-8434-44b5-b86c-15a9d11c4245\") " pod="openshift-authentication/oauth-openshift-558db77b4-n6d5q" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.985189 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b543d0c3-b775-4c87-bbd0-016e86361945-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-cg7wn\" (UID: \"b543d0c3-b775-4c87-bbd0-016e86361945\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cg7wn" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.985989 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2d8r7"] Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.986695 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2d8r7" Nov 25 10:34:02 crc kubenswrapper[4813]: I1125 10:34:02.987043 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kht7r" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.020228 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.027097 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-hvj2g"] Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.027709 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-hvj2g" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.032613 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-shxrh"] Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.033200 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mpgj4"] Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.033558 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-shxrh" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.033621 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mpgj4" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.036725 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7s8tp"] Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.036963 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.037181 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.037531 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.037819 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-d2ltx"] Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.037884 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.037921 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.037976 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.038275 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-7s8tp" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.038314 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.038434 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-xlrc4"] Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.038457 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.038622 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.038666 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.038814 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.038989 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-d2ltx" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.039378 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.039516 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.039631 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.039781 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.039907 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.040053 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.040147 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xlrc4" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.040175 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.040323 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.040531 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.040594 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.050350 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.051109 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.052985 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.053372 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.054258 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.057491 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vn7cb"] Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.058157 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vn7cb" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.058843 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-mbr49"] Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.061341 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-mbr49" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.062093 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.063699 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nklcx"] Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.066758 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nklcx" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.069161 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.071152 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.071710 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hdv9w"] Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.072771 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hdv9w" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.074531 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.075619 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.076951 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.076968 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-w7ltb"] Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.086258 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-n6d5q"] Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.087403 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-lsg8f"] Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.089475 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-lsg8f" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.090545 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.091062 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7b2wt"] Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.092793 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/537ea37d-925a-4ba7-95de-307e69630afb-config\") pod \"authentication-operator-69f744f599-d8jnq\" (UID: \"537ea37d-925a-4ba7-95de-307e69630afb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-d8jnq" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.092901 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6l5qp\" (UniqueName: \"kubernetes.io/projected/537ea37d-925a-4ba7-95de-307e69630afb-kube-api-access-6l5qp\") pod \"authentication-operator-69f744f599-d8jnq\" (UID: \"537ea37d-925a-4ba7-95de-307e69630afb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-d8jnq" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.092970 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsznf\" (UniqueName: \"kubernetes.io/projected/09baf5a6-68d3-4173-ba92-46e36fab8a2e-kube-api-access-hsznf\") pod \"controller-manager-879f6c89f-vd4gc\" (UID: \"09baf5a6-68d3-4173-ba92-46e36fab8a2e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vd4gc" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.093032 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/7a70dbef-bca6-47b6-8814-424cc0cbf441-image-import-ca\") pod \"apiserver-76f77b778f-5ngzq\" (UID: \"7a70dbef-bca6-47b6-8814-424cc0cbf441\") " pod="openshift-apiserver/apiserver-76f77b778f-5ngzq" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.093075 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09baf5a6-68d3-4173-ba92-46e36fab8a2e-serving-cert\") pod \"controller-manager-879f6c89f-vd4gc\" (UID: \"09baf5a6-68d3-4173-ba92-46e36fab8a2e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vd4gc" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.093141 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f94406f9-8434-44b5-b86c-15a9d11c4245-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-n6d5q\" (UID: \"f94406f9-8434-44b5-b86c-15a9d11c4245\") " pod="openshift-authentication/oauth-openshift-558db77b4-n6d5q" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.093190 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f94406f9-8434-44b5-b86c-15a9d11c4245-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-n6d5q\" (UID: \"f94406f9-8434-44b5-b86c-15a9d11c4245\") " pod="openshift-authentication/oauth-openshift-558db77b4-n6d5q" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.093213 4813 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-49lwb\" (UniqueName: \"kubernetes.io/projected/609ff7ea-0071-4b93-af38-87f1d04aa886-kube-api-access-49lwb\") pod \"console-operator-58897d9998-wfr92\" (UID: \"609ff7ea-0071-4b93-af38-87f1d04aa886\") " pod="openshift-console-operator/console-operator-58897d9998-wfr92" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.093284 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f94406f9-8434-44b5-b86c-15a9d11c4245-audit-policies\") pod \"oauth-openshift-558db77b4-n6d5q\" (UID: \"f94406f9-8434-44b5-b86c-15a9d11c4245\") " pod="openshift-authentication/oauth-openshift-558db77b4-n6d5q" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.093311 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65730459-3e56-4cd2-97f4-4e47f60c32c6-serving-cert\") pod \"route-controller-manager-6576b87f9c-ckjsl\" (UID: \"65730459-3e56-4cd2-97f4-4e47f60c32c6\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ckjsl" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.093391 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7xvj\" (UniqueName: \"kubernetes.io/projected/8804f49f-9764-4368-ab35-dcf4dadfb223-kube-api-access-t7xvj\") pod \"cluster-samples-operator-665b6dd947-7b2wt\" (UID: \"8804f49f-9764-4368-ab35-dcf4dadfb223\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7b2wt" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.093453 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0120e24c-5159-481f-a3d3-e802a58be557-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-w7ltb\" (UID: \"0120e24c-5159-481f-a3d3-e802a58be557\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-w7ltb" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.093476 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0120e24c-5159-481f-a3d3-e802a58be557-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-w7ltb\" (UID: \"0120e24c-5159-481f-a3d3-e802a58be557\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-w7ltb" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.093539 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f94406f9-8434-44b5-b86c-15a9d11c4245-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-n6d5q\" (UID: \"f94406f9-8434-44b5-b86c-15a9d11c4245\") " pod="openshift-authentication/oauth-openshift-558db77b4-n6d5q" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.093791 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2t5q\" (UniqueName: \"kubernetes.io/projected/f94406f9-8434-44b5-b86c-15a9d11c4245-kube-api-access-s2t5q\") pod \"oauth-openshift-558db77b4-n6d5q\" (UID: \"f94406f9-8434-44b5-b86c-15a9d11c4245\") " pod="openshift-authentication/oauth-openshift-558db77b4-n6d5q" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.093841 4813 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b543d0c3-b775-4c87-bbd0-016e86361945-etcd-client\") pod \"apiserver-7bbb656c7d-cg7wn\" (UID: \"b543d0c3-b775-4c87-bbd0-016e86361945\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cg7wn" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.094069 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gwbx\" (UniqueName: \"kubernetes.io/projected/083d68a5-93e4-4dbd-9ba0-e4e7d30da8f7-kube-api-access-2gwbx\") pod \"ingress-operator-5b745b69d9-kht7r\" (UID: \"083d68a5-93e4-4dbd-9ba0-e4e7d30da8f7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kht7r" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.094201 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/537ea37d-925a-4ba7-95de-307e69630afb-service-ca-bundle\") pod \"authentication-operator-69f744f599-d8jnq\" (UID: \"537ea37d-925a-4ba7-95de-307e69630afb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-d8jnq" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.094438 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/609ff7ea-0071-4b93-af38-87f1d04aa886-serving-cert\") pod \"console-operator-58897d9998-wfr92\" (UID: \"609ff7ea-0071-4b93-af38-87f1d04aa886\") " pod="openshift-console-operator/console-operator-58897d9998-wfr92" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.094479 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7a70dbef-bca6-47b6-8814-424cc0cbf441-etcd-client\") pod \"apiserver-76f77b778f-5ngzq\" (UID: \"7a70dbef-bca6-47b6-8814-424cc0cbf441\") " pod="openshift-apiserver/apiserver-76f77b778f-5ngzq" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.094537 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/09baf5a6-68d3-4173-ba92-46e36fab8a2e-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-vd4gc\" (UID: \"09baf5a6-68d3-4173-ba92-46e36fab8a2e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vd4gc" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.094591 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8e74616c-72c8-41c2-901e-272c15e94ee7-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-7hmqn\" (UID: \"8e74616c-72c8-41c2-901e-272c15e94ee7\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7hmqn" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.094738 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzvbw\" (UniqueName: \"kubernetes.io/projected/40d84038-a98d-46f7-90b9-b65d9eb09937-kube-api-access-dzvbw\") pod \"openshift-apiserver-operator-796bbdcf4f-456g2\" (UID: \"40d84038-a98d-46f7-90b9-b65d9eb09937\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-456g2" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.095013 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/b543d0c3-b775-4c87-bbd0-016e86361945-audit-policies\") pod \"apiserver-7bbb656c7d-cg7wn\" (UID: \"b543d0c3-b775-4c87-bbd0-016e86361945\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cg7wn" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.095074 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7cdv\" (UniqueName: \"kubernetes.io/projected/b543d0c3-b775-4c87-bbd0-016e86361945-kube-api-access-x7cdv\") pod \"apiserver-7bbb656c7d-cg7wn\" (UID: \"b543d0c3-b775-4c87-bbd0-016e86361945\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cg7wn" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.095103 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7a70dbef-bca6-47b6-8814-424cc0cbf441-audit-dir\") pod \"apiserver-76f77b778f-5ngzq\" (UID: \"7a70dbef-bca6-47b6-8814-424cc0cbf441\") " pod="openshift-apiserver/apiserver-76f77b778f-5ngzq" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.095153 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/8804f49f-9764-4368-ab35-dcf4dadfb223-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-7b2wt\" (UID: \"8804f49f-9764-4368-ab35-dcf4dadfb223\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7b2wt" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.095191 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqvd7\" (UniqueName: \"kubernetes.io/projected/8e74616c-72c8-41c2-901e-272c15e94ee7-kube-api-access-qqvd7\") pod \"cluster-image-registry-operator-dc59b4c8b-7hmqn\" (UID: \"8e74616c-72c8-41c2-901e-272c15e94ee7\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7hmqn" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.095230 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40d84038-a98d-46f7-90b9-b65d9eb09937-config\") pod \"openshift-apiserver-operator-796bbdcf4f-456g2\" (UID: \"40d84038-a98d-46f7-90b9-b65d9eb09937\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-456g2" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.095262 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f94406f9-8434-44b5-b86c-15a9d11c4245-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-n6d5q\" (UID: \"f94406f9-8434-44b5-b86c-15a9d11c4245\") " pod="openshift-authentication/oauth-openshift-558db77b4-n6d5q" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.095289 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7a70dbef-bca6-47b6-8814-424cc0cbf441-node-pullsecrets\") pod \"apiserver-76f77b778f-5ngzq\" (UID: \"7a70dbef-bca6-47b6-8814-424cc0cbf441\") " pod="openshift-apiserver/apiserver-76f77b778f-5ngzq" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.095313 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/083d68a5-93e4-4dbd-9ba0-e4e7d30da8f7-bound-sa-token\") pod \"ingress-operator-5b745b69d9-kht7r\" (UID: 
\"083d68a5-93e4-4dbd-9ba0-e4e7d30da8f7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kht7r" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.095335 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8e74616c-72c8-41c2-901e-272c15e94ee7-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-7hmqn\" (UID: \"8e74616c-72c8-41c2-901e-272c15e94ee7\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7hmqn" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.095366 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f94406f9-8434-44b5-b86c-15a9d11c4245-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-n6d5q\" (UID: \"f94406f9-8434-44b5-b86c-15a9d11c4245\") " pod="openshift-authentication/oauth-openshift-558db77b4-n6d5q" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.095388 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/09baf5a6-68d3-4173-ba92-46e36fab8a2e-client-ca\") pod \"controller-manager-879f6c89f-vd4gc\" (UID: \"09baf5a6-68d3-4173-ba92-46e36fab8a2e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vd4gc" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.095409 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f94406f9-8434-44b5-b86c-15a9d11c4245-audit-policies\") pod \"oauth-openshift-558db77b4-n6d5q\" (UID: \"f94406f9-8434-44b5-b86c-15a9d11c4245\") " pod="openshift-authentication/oauth-openshift-558db77b4-n6d5q" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.095416 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8e74616c-72c8-41c2-901e-272c15e94ee7-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-7hmqn\" (UID: \"8e74616c-72c8-41c2-901e-272c15e94ee7\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7hmqn" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.097464 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-frcz9"] Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.098285 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/dd388810-9d8b-4057-942d-7249cf14d38f-machine-approver-tls\") pod \"machine-approver-56656f9798-87pc9\" (UID: \"dd388810-9d8b-4057-942d-7249cf14d38f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-87pc9" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.098330 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a70dbef-bca6-47b6-8814-424cc0cbf441-config\") pod \"apiserver-76f77b778f-5ngzq\" (UID: \"7a70dbef-bca6-47b6-8814-424cc0cbf441\") " pod="openshift-apiserver/apiserver-76f77b778f-5ngzq" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.098426 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/dd388810-9d8b-4057-942d-7249cf14d38f-auth-proxy-config\") pod \"machine-approver-56656f9798-87pc9\" (UID: \"dd388810-9d8b-4057-942d-7249cf14d38f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-87pc9" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.098518 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f94406f9-8434-44b5-b86c-15a9d11c4245-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-n6d5q\" (UID: \"f94406f9-8434-44b5-b86c-15a9d11c4245\") " pod="openshift-authentication/oauth-openshift-558db77b4-n6d5q" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.098789 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f94406f9-8434-44b5-b86c-15a9d11c4245-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-n6d5q\" (UID: \"f94406f9-8434-44b5-b86c-15a9d11c4245\") " pod="openshift-authentication/oauth-openshift-558db77b4-n6d5q" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.098848 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4221bb1b-a98a-4ddf-b8cf-21a2db2e2b72-etcd-client\") pod \"etcd-operator-b45778765-hkxnn\" (UID: \"4221bb1b-a98a-4ddf-b8cf-21a2db2e2b72\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hkxnn" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.098883 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9b7m\" (UniqueName: \"kubernetes.io/projected/52c2799b-2750-4c0e-8a0b-b1112a7c25f1-kube-api-access-q9b7m\") pod \"openshift-controller-manager-operator-756b6f6bc6-ld2mj\" (UID: \"52c2799b-2750-4c0e-8a0b-b1112a7c25f1\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ld2mj" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.098914 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5x8fl\" (UniqueName: \"kubernetes.io/projected/616a1226-9627-43a9-a1a7-5dfb4cf863d8-kube-api-access-5x8fl\") pod \"machine-api-operator-5694c8668f-48zrm\" (UID: \"616a1226-9627-43a9-a1a7-5dfb4cf863d8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-48zrm" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.098948 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f94406f9-8434-44b5-b86c-15a9d11c4245-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-n6d5q\" (UID: \"f94406f9-8434-44b5-b86c-15a9d11c4245\") " pod="openshift-authentication/oauth-openshift-558db77b4-n6d5q" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.099805 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40d84038-a98d-46f7-90b9-b65d9eb09937-config\") pod \"openshift-apiserver-operator-796bbdcf4f-456g2\" (UID: \"40d84038-a98d-46f7-90b9-b65d9eb09937\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-456g2" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.100538 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" 
(UniqueName: \"kubernetes.io/configmap/b543d0c3-b775-4c87-bbd0-016e86361945-audit-policies\") pod \"apiserver-7bbb656c7d-cg7wn\" (UID: \"b543d0c3-b775-4c87-bbd0-016e86361945\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cg7wn" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.100774 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/7a70dbef-bca6-47b6-8814-424cc0cbf441-image-import-ca\") pod \"apiserver-76f77b778f-5ngzq\" (UID: \"7a70dbef-bca6-47b6-8814-424cc0cbf441\") " pod="openshift-apiserver/apiserver-76f77b778f-5ngzq" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.102836 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7a70dbef-bca6-47b6-8814-424cc0cbf441-audit-dir\") pod \"apiserver-76f77b778f-5ngzq\" (UID: \"7a70dbef-bca6-47b6-8814-424cc0cbf441\") " pod="openshift-apiserver/apiserver-76f77b778f-5ngzq" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.102888 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7a70dbef-bca6-47b6-8814-424cc0cbf441-node-pullsecrets\") pod \"apiserver-76f77b778f-5ngzq\" (UID: \"7a70dbef-bca6-47b6-8814-424cc0cbf441\") " pod="openshift-apiserver/apiserver-76f77b778f-5ngzq" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.103451 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.103771 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/537ea37d-925a-4ba7-95de-307e69630afb-config\") pod \"authentication-operator-69f744f599-d8jnq\" (UID: \"537ea37d-925a-4ba7-95de-307e69630afb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-d8jnq" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.104113 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f94406f9-8434-44b5-b86c-15a9d11c4245-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-n6d5q\" (UID: \"f94406f9-8434-44b5-b86c-15a9d11c4245\") " pod="openshift-authentication/oauth-openshift-558db77b4-n6d5q" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.104524 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/537ea37d-925a-4ba7-95de-307e69630afb-service-ca-bundle\") pod \"authentication-operator-69f744f599-d8jnq\" (UID: \"537ea37d-925a-4ba7-95de-307e69630afb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-d8jnq" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.105168 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/09baf5a6-68d3-4173-ba92-46e36fab8a2e-client-ca\") pod \"controller-manager-879f6c89f-vd4gc\" (UID: \"09baf5a6-68d3-4173-ba92-46e36fab8a2e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vd4gc" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.106884 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b543d0c3-b775-4c87-bbd0-016e86361945-trusted-ca-bundle\") pod 
\"apiserver-7bbb656c7d-cg7wn\" (UID: \"b543d0c3-b775-4c87-bbd0-016e86361945\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cg7wn" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.107062 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65730459-3e56-4cd2-97f4-4e47f60c32c6-config\") pod \"route-controller-manager-6576b87f9c-ckjsl\" (UID: \"65730459-3e56-4cd2-97f4-4e47f60c32c6\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ckjsl" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.107133 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jg6fs\" (UniqueName: \"kubernetes.io/projected/54ad0590-7880-4467-b980-334b0ea3807c-kube-api-access-jg6fs\") pod \"openshift-config-operator-7777fb866f-frcz9\" (UID: \"54ad0590-7880-4467-b980-334b0ea3807c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-frcz9" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.107409 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f94406f9-8434-44b5-b86c-15a9d11c4245-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-n6d5q\" (UID: \"f94406f9-8434-44b5-b86c-15a9d11c4245\") " pod="openshift-authentication/oauth-openshift-558db77b4-n6d5q" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.107548 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/537ea37d-925a-4ba7-95de-307e69630afb-serving-cert\") pod \"authentication-operator-69f744f599-d8jnq\" (UID: \"537ea37d-925a-4ba7-95de-307e69630afb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-d8jnq" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.107618 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/609ff7ea-0071-4b93-af38-87f1d04aa886-trusted-ca\") pod \"console-operator-58897d9998-wfr92\" (UID: \"609ff7ea-0071-4b93-af38-87f1d04aa886\") " pod="openshift-console-operator/console-operator-58897d9998-wfr92" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.107639 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f94406f9-8434-44b5-b86c-15a9d11c4245-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-n6d5q\" (UID: \"f94406f9-8434-44b5-b86c-15a9d11c4245\") " pod="openshift-authentication/oauth-openshift-558db77b4-n6d5q" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.107723 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f94406f9-8434-44b5-b86c-15a9d11c4245-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-n6d5q\" (UID: \"f94406f9-8434-44b5-b86c-15a9d11c4245\") " pod="openshift-authentication/oauth-openshift-558db77b4-n6d5q" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.107756 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b543d0c3-b775-4c87-bbd0-016e86361945-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-cg7wn\" (UID: \"b543d0c3-b775-4c87-bbd0-016e86361945\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cg7wn" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.107781 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a70dbef-bca6-47b6-8814-424cc0cbf441-serving-cert\") pod \"apiserver-76f77b778f-5ngzq\" (UID: \"7a70dbef-bca6-47b6-8814-424cc0cbf441\") " pod="openshift-apiserver/apiserver-76f77b778f-5ngzq" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.107802 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/54ad0590-7880-4467-b980-334b0ea3807c-available-featuregates\") pod \"openshift-config-operator-7777fb866f-frcz9\" (UID: \"54ad0590-7880-4467-b980-334b0ea3807c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-frcz9" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.107891 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/616a1226-9627-43a9-a1a7-5dfb4cf863d8-images\") pod \"machine-api-operator-5694c8668f-48zrm\" (UID: \"616a1226-9627-43a9-a1a7-5dfb4cf863d8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-48zrm" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.109624 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/609ff7ea-0071-4b93-af38-87f1d04aa886-config\") pod \"console-operator-58897d9998-wfr92\" (UID: \"609ff7ea-0071-4b93-af38-87f1d04aa886\") " pod="openshift-console-operator/console-operator-58897d9998-wfr92" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.109671 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b543d0c3-b775-4c87-bbd0-016e86361945-serving-cert\") pod \"apiserver-7bbb656c7d-cg7wn\" (UID: \"b543d0c3-b775-4c87-bbd0-016e86361945\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cg7wn" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.112516 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b543d0c3-b775-4c87-bbd0-016e86361945-etcd-client\") pod \"apiserver-7bbb656c7d-cg7wn\" (UID: \"b543d0c3-b775-4c87-bbd0-016e86361945\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cg7wn" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.112660 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f94406f9-8434-44b5-b86c-15a9d11c4245-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-n6d5q\" (UID: \"f94406f9-8434-44b5-b86c-15a9d11c4245\") " pod="openshift-authentication/oauth-openshift-558db77b4-n6d5q" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.114936 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f94406f9-8434-44b5-b86c-15a9d11c4245-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-n6d5q\" (UID: \"f94406f9-8434-44b5-b86c-15a9d11c4245\") " pod="openshift-authentication/oauth-openshift-558db77b4-n6d5q" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.114931 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/b543d0c3-b775-4c87-bbd0-016e86361945-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-cg7wn\" (UID: \"b543d0c3-b775-4c87-bbd0-016e86361945\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cg7wn" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.115022 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f94406f9-8434-44b5-b86c-15a9d11c4245-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-n6d5q\" (UID: \"f94406f9-8434-44b5-b86c-15a9d11c4245\") " pod="openshift-authentication/oauth-openshift-558db77b4-n6d5q" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.121419 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f94406f9-8434-44b5-b86c-15a9d11c4245-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-n6d5q\" (UID: \"f94406f9-8434-44b5-b86c-15a9d11c4245\") " pod="openshift-authentication/oauth-openshift-558db77b4-n6d5q" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.127702 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a70dbef-bca6-47b6-8814-424cc0cbf441-serving-cert\") pod \"apiserver-76f77b778f-5ngzq\" (UID: \"7a70dbef-bca6-47b6-8814-424cc0cbf441\") " pod="openshift-apiserver/apiserver-76f77b778f-5ngzq" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.127780 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/609ff7ea-0071-4b93-af38-87f1d04aa886-serving-cert\") pod \"console-operator-58897d9998-wfr92\" (UID: \"609ff7ea-0071-4b93-af38-87f1d04aa886\") " pod="openshift-console-operator/console-operator-58897d9998-wfr92" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.129263 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/8804f49f-9764-4368-ab35-dcf4dadfb223-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-7b2wt\" (UID: \"8804f49f-9764-4368-ab35-dcf4dadfb223\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7b2wt" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.142992 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65730459-3e56-4cd2-97f4-4e47f60c32c6-serving-cert\") pod \"route-controller-manager-6576b87f9c-ckjsl\" (UID: \"65730459-3e56-4cd2-97f4-4e47f60c32c6\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ckjsl" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.143164 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.145457 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a70dbef-bca6-47b6-8814-424cc0cbf441-config\") pod \"apiserver-76f77b778f-5ngzq\" (UID: \"7a70dbef-bca6-47b6-8814-424cc0cbf441\") " pod="openshift-apiserver/apiserver-76f77b778f-5ngzq" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.145480 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/09baf5a6-68d3-4173-ba92-46e36fab8a2e-proxy-ca-bundles\") pod 
\"controller-manager-879f6c89f-vd4gc\" (UID: \"09baf5a6-68d3-4173-ba92-46e36fab8a2e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vd4gc" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.147137 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f94406f9-8434-44b5-b86c-15a9d11c4245-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-n6d5q\" (UID: \"f94406f9-8434-44b5-b86c-15a9d11c4245\") " pod="openshift-authentication/oauth-openshift-558db77b4-n6d5q" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.147142 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dd388810-9d8b-4057-942d-7249cf14d38f-auth-proxy-config\") pod \"machine-approver-56656f9798-87pc9\" (UID: \"dd388810-9d8b-4057-942d-7249cf14d38f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-87pc9" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.147348 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b543d0c3-b775-4c87-bbd0-016e86361945-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-cg7wn\" (UID: \"b543d0c3-b775-4c87-bbd0-016e86361945\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cg7wn" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.147480 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7a70dbef-bca6-47b6-8814-424cc0cbf441-etcd-client\") pod \"apiserver-76f77b778f-5ngzq\" (UID: \"7a70dbef-bca6-47b6-8814-424cc0cbf441\") " pod="openshift-apiserver/apiserver-76f77b778f-5ngzq" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.147752 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09baf5a6-68d3-4173-ba92-46e36fab8a2e-serving-cert\") pod \"controller-manager-879f6c89f-vd4gc\" (UID: \"09baf5a6-68d3-4173-ba92-46e36fab8a2e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vd4gc" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.147769 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f94406f9-8434-44b5-b86c-15a9d11c4245-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-n6d5q\" (UID: \"f94406f9-8434-44b5-b86c-15a9d11c4245\") " pod="openshift-authentication/oauth-openshift-558db77b4-n6d5q" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.109736 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7a70dbef-bca6-47b6-8814-424cc0cbf441-encryption-config\") pod \"apiserver-76f77b778f-5ngzq\" (UID: \"7a70dbef-bca6-47b6-8814-424cc0cbf441\") " pod="openshift-apiserver/apiserver-76f77b778f-5ngzq" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.148221 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b734673f-c958-487f-8871-cf40f8fe8e0b-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-wsrtz\" (UID: \"b734673f-c958-487f-8871-cf40f8fe8e0b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-wsrtz" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 
10:34:03.148347 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/537ea37d-925a-4ba7-95de-307e69630afb-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-d8jnq\" (UID: \"537ea37d-925a-4ba7-95de-307e69630afb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-d8jnq" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.148454 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/4221bb1b-a98a-4ddf-b8cf-21a2db2e2b72-etcd-service-ca\") pod \"etcd-operator-b45778765-hkxnn\" (UID: \"4221bb1b-a98a-4ddf-b8cf-21a2db2e2b72\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hkxnn" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.148537 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/609ff7ea-0071-4b93-af38-87f1d04aa886-trusted-ca\") pod \"console-operator-58897d9998-wfr92\" (UID: \"609ff7ea-0071-4b93-af38-87f1d04aa886\") " pod="openshift-console-operator/console-operator-58897d9998-wfr92" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.148637 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54ad0590-7880-4467-b980-334b0ea3807c-serving-cert\") pod \"openshift-config-operator-7777fb866f-frcz9\" (UID: \"54ad0590-7880-4467-b980-334b0ea3807c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-frcz9" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.148751 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/609ff7ea-0071-4b93-af38-87f1d04aa886-config\") pod \"console-operator-58897d9998-wfr92\" (UID: \"609ff7ea-0071-4b93-af38-87f1d04aa886\") " pod="openshift-console-operator/console-operator-58897d9998-wfr92" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.148765 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86jf4\" (UniqueName: \"kubernetes.io/projected/1ef5c9d6-3f23-49c2-87e0-5c6d76ae0aa6-kube-api-access-86jf4\") pod \"dns-operator-744455d44c-dsd6j\" (UID: \"1ef5c9d6-3f23-49c2-87e0-5c6d76ae0aa6\") " pod="openshift-dns-operator/dns-operator-744455d44c-dsd6j" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.148824 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/537ea37d-925a-4ba7-95de-307e69630afb-serving-cert\") pod \"authentication-operator-69f744f599-d8jnq\" (UID: \"537ea37d-925a-4ba7-95de-307e69630afb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-d8jnq" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.148843 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52c2799b-2750-4c0e-8a0b-b1112a7c25f1-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-ld2mj\" (UID: \"52c2799b-2750-4c0e-8a0b-b1112a7c25f1\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ld2mj" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.148850 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" 
(UniqueName: \"kubernetes.io/secret/dd388810-9d8b-4057-942d-7249cf14d38f-machine-approver-tls\") pod \"machine-approver-56656f9798-87pc9\" (UID: \"dd388810-9d8b-4057-942d-7249cf14d38f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-87pc9" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.148873 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/083d68a5-93e4-4dbd-9ba0-e4e7d30da8f7-trusted-ca\") pod \"ingress-operator-5b745b69d9-kht7r\" (UID: \"083d68a5-93e4-4dbd-9ba0-e4e7d30da8f7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kht7r" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.148905 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0120e24c-5159-481f-a3d3-e802a58be557-config\") pod \"kube-controller-manager-operator-78b949d7b-w7ltb\" (UID: \"0120e24c-5159-481f-a3d3-e802a58be557\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-w7ltb" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.148956 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/616a1226-9627-43a9-a1a7-5dfb4cf863d8-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-48zrm\" (UID: \"616a1226-9627-43a9-a1a7-5dfb4cf863d8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-48zrm" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.148998 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/4221bb1b-a98a-4ddf-b8cf-21a2db2e2b72-etcd-ca\") pod \"etcd-operator-b45778765-hkxnn\" (UID: \"4221bb1b-a98a-4ddf-b8cf-21a2db2e2b72\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hkxnn" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.149084 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b543d0c3-b775-4c87-bbd0-016e86361945-encryption-config\") pod \"apiserver-7bbb656c7d-cg7wn\" (UID: \"b543d0c3-b775-4c87-bbd0-016e86361945\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cg7wn" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.149117 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4221bb1b-a98a-4ddf-b8cf-21a2db2e2b72-config\") pod \"etcd-operator-b45778765-hkxnn\" (UID: \"4221bb1b-a98a-4ddf-b8cf-21a2db2e2b72\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hkxnn" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.149140 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kc4sf\" (UniqueName: \"kubernetes.io/projected/a4fc4e54-61da-43ab-934e-5f7ed6178ab6-kube-api-access-kc4sf\") pod \"downloads-7954f5f757-482dq\" (UID: \"a4fc4e54-61da-43ab-934e-5f7ed6178ab6\") " pod="openshift-console/downloads-7954f5f757-482dq" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.149164 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd388810-9d8b-4057-942d-7249cf14d38f-config\") pod \"machine-approver-56656f9798-87pc9\" (UID: 
\"dd388810-9d8b-4057-942d-7249cf14d38f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-87pc9" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.149183 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqqsj\" (UniqueName: \"kubernetes.io/projected/dd388810-9d8b-4057-942d-7249cf14d38f-kube-api-access-nqqsj\") pod \"machine-approver-56656f9798-87pc9\" (UID: \"dd388810-9d8b-4057-942d-7249cf14d38f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-87pc9" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.149413 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40d84038-a98d-46f7-90b9-b65d9eb09937-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-456g2\" (UID: \"40d84038-a98d-46f7-90b9-b65d9eb09937\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-456g2" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.149608 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd388810-9d8b-4057-942d-7249cf14d38f-config\") pod \"machine-approver-56656f9798-87pc9\" (UID: \"dd388810-9d8b-4057-942d-7249cf14d38f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-87pc9" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.149636 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5cc0ff08-77b5-4ca3-bded-0dd386a5009d-config\") pod \"kube-apiserver-operator-766d6c64bb-2d8r7\" (UID: \"5cc0ff08-77b5-4ca3-bded-0dd386a5009d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2d8r7" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.150276 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65730459-3e56-4cd2-97f4-4e47f60c32c6-config\") pod \"route-controller-manager-6576b87f9c-ckjsl\" (UID: \"65730459-3e56-4cd2-97f4-4e47f60c32c6\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ckjsl" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.150321 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5cc0ff08-77b5-4ca3-bded-0dd386a5009d-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-2d8r7\" (UID: \"5cc0ff08-77b5-4ca3-bded-0dd386a5009d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2d8r7" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.150363 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfsrh\" (UniqueName: \"kubernetes.io/projected/4221bb1b-a98a-4ddf-b8cf-21a2db2e2b72-kube-api-access-rfsrh\") pod \"etcd-operator-b45778765-hkxnn\" (UID: \"4221bb1b-a98a-4ddf-b8cf-21a2db2e2b72\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hkxnn" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.150390 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4221bb1b-a98a-4ddf-b8cf-21a2db2e2b72-serving-cert\") pod \"etcd-operator-b45778765-hkxnn\" (UID: \"4221bb1b-a98a-4ddf-b8cf-21a2db2e2b72\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hkxnn" 
Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.150426 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/7a70dbef-bca6-47b6-8814-424cc0cbf441-audit\") pod \"apiserver-76f77b778f-5ngzq\" (UID: \"7a70dbef-bca6-47b6-8814-424cc0cbf441\") " pod="openshift-apiserver/apiserver-76f77b778f-5ngzq" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.150454 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqxfz\" (UniqueName: \"kubernetes.io/projected/65730459-3e56-4cd2-97f4-4e47f60c32c6-kube-api-access-nqxfz\") pod \"route-controller-manager-6576b87f9c-ckjsl\" (UID: \"65730459-3e56-4cd2-97f4-4e47f60c32c6\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ckjsl" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.150485 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrwgm\" (UniqueName: \"kubernetes.io/projected/b734673f-c958-487f-8871-cf40f8fe8e0b-kube-api-access-wrwgm\") pod \"multus-admission-controller-857f4d67dd-wsrtz\" (UID: \"b734673f-c958-487f-8871-cf40f8fe8e0b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-wsrtz" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.150513 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b543d0c3-b775-4c87-bbd0-016e86361945-audit-dir\") pod \"apiserver-7bbb656c7d-cg7wn\" (UID: \"b543d0c3-b775-4c87-bbd0-016e86361945\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cg7wn" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.150601 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52c2799b-2750-4c0e-8a0b-b1112a7c25f1-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-ld2mj\" (UID: \"52c2799b-2750-4c0e-8a0b-b1112a7c25f1\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ld2mj" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.150648 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f94406f9-8434-44b5-b86c-15a9d11c4245-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-n6d5q\" (UID: \"f94406f9-8434-44b5-b86c-15a9d11c4245\") " pod="openshift-authentication/oauth-openshift-558db77b4-n6d5q" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.150702 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/083d68a5-93e4-4dbd-9ba0-e4e7d30da8f7-metrics-tls\") pod \"ingress-operator-5b745b69d9-kht7r\" (UID: \"083d68a5-93e4-4dbd-9ba0-e4e7d30da8f7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kht7r" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.150714 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b543d0c3-b775-4c87-bbd0-016e86361945-audit-dir\") pod \"apiserver-7bbb656c7d-cg7wn\" (UID: \"b543d0c3-b775-4c87-bbd0-016e86361945\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cg7wn" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.150763 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5cc0ff08-77b5-4ca3-bded-0dd386a5009d-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-2d8r7\" (UID: \"5cc0ff08-77b5-4ca3-bded-0dd386a5009d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2d8r7" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.150868 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/616a1226-9627-43a9-a1a7-5dfb4cf863d8-config\") pod \"machine-api-operator-5694c8668f-48zrm\" (UID: \"616a1226-9627-43a9-a1a7-5dfb4cf863d8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-48zrm" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.150925 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f94406f9-8434-44b5-b86c-15a9d11c4245-audit-dir\") pod \"oauth-openshift-558db77b4-n6d5q\" (UID: \"f94406f9-8434-44b5-b86c-15a9d11c4245\") " pod="openshift-authentication/oauth-openshift-558db77b4-n6d5q" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.150957 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09baf5a6-68d3-4173-ba92-46e36fab8a2e-config\") pod \"controller-manager-879f6c89f-vd4gc\" (UID: \"09baf5a6-68d3-4173-ba92-46e36fab8a2e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vd4gc" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.151019 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7a70dbef-bca6-47b6-8814-424cc0cbf441-etcd-serving-ca\") pod \"apiserver-76f77b778f-5ngzq\" (UID: \"7a70dbef-bca6-47b6-8814-424cc0cbf441\") " pod="openshift-apiserver/apiserver-76f77b778f-5ngzq" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.151138 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a70dbef-bca6-47b6-8814-424cc0cbf441-trusted-ca-bundle\") pod \"apiserver-76f77b778f-5ngzq\" (UID: \"7a70dbef-bca6-47b6-8814-424cc0cbf441\") " pod="openshift-apiserver/apiserver-76f77b778f-5ngzq" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.151196 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lt2w\" (UniqueName: \"kubernetes.io/projected/7a70dbef-bca6-47b6-8814-424cc0cbf441-kube-api-access-9lt2w\") pod \"apiserver-76f77b778f-5ngzq\" (UID: \"7a70dbef-bca6-47b6-8814-424cc0cbf441\") " pod="openshift-apiserver/apiserver-76f77b778f-5ngzq" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.151221 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/65730459-3e56-4cd2-97f4-4e47f60c32c6-client-ca\") pod \"route-controller-manager-6576b87f9c-ckjsl\" (UID: \"65730459-3e56-4cd2-97f4-4e47f60c32c6\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ckjsl" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.151329 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/7a70dbef-bca6-47b6-8814-424cc0cbf441-audit\") pod \"apiserver-76f77b778f-5ngzq\" (UID: \"7a70dbef-bca6-47b6-8814-424cc0cbf441\") " pod="openshift-apiserver/apiserver-76f77b778f-5ngzq" Nov 25 
10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.151359 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1ef5c9d6-3f23-49c2-87e0-5c6d76ae0aa6-metrics-tls\") pod \"dns-operator-744455d44c-dsd6j\" (UID: \"1ef5c9d6-3f23-49c2-87e0-5c6d76ae0aa6\") " pod="openshift-dns-operator/dns-operator-744455d44c-dsd6j" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.151704 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7a70dbef-bca6-47b6-8814-424cc0cbf441-encryption-config\") pod \"apiserver-76f77b778f-5ngzq\" (UID: \"7a70dbef-bca6-47b6-8814-424cc0cbf441\") " pod="openshift-apiserver/apiserver-76f77b778f-5ngzq" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.151775 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-wfr92"] Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.151811 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f94406f9-8434-44b5-b86c-15a9d11c4245-audit-dir\") pod \"oauth-openshift-558db77b4-n6d5q\" (UID: \"f94406f9-8434-44b5-b86c-15a9d11c4245\") " pod="openshift-authentication/oauth-openshift-558db77b4-n6d5q" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.152357 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7a70dbef-bca6-47b6-8814-424cc0cbf441-etcd-serving-ca\") pod \"apiserver-76f77b778f-5ngzq\" (UID: \"7a70dbef-bca6-47b6-8814-424cc0cbf441\") " pod="openshift-apiserver/apiserver-76f77b778f-5ngzq" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.152704 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/65730459-3e56-4cd2-97f4-4e47f60c32c6-client-ca\") pod \"route-controller-manager-6576b87f9c-ckjsl\" (UID: \"65730459-3e56-4cd2-97f4-4e47f60c32c6\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ckjsl" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.152733 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a70dbef-bca6-47b6-8814-424cc0cbf441-trusted-ca-bundle\") pod \"apiserver-76f77b778f-5ngzq\" (UID: \"7a70dbef-bca6-47b6-8814-424cc0cbf441\") " pod="openshift-apiserver/apiserver-76f77b778f-5ngzq" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.152959 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f94406f9-8434-44b5-b86c-15a9d11c4245-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-n6d5q\" (UID: \"f94406f9-8434-44b5-b86c-15a9d11c4245\") " pod="openshift-authentication/oauth-openshift-558db77b4-n6d5q" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.153288 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-tfjp7"] Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.153958 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09baf5a6-68d3-4173-ba92-46e36fab8a2e-config\") pod \"controller-manager-879f6c89f-vd4gc\" (UID: \"09baf5a6-68d3-4173-ba92-46e36fab8a2e\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-vd4gc" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.154246 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40d84038-a98d-46f7-90b9-b65d9eb09937-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-456g2\" (UID: \"40d84038-a98d-46f7-90b9-b65d9eb09937\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-456g2" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.154450 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-tfjp7" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.155510 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f94406f9-8434-44b5-b86c-15a9d11c4245-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-n6d5q\" (UID: \"f94406f9-8434-44b5-b86c-15a9d11c4245\") " pod="openshift-authentication/oauth-openshift-558db77b4-n6d5q" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.156121 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.157580 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f94406f9-8434-44b5-b86c-15a9d11c4245-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-n6d5q\" (UID: \"f94406f9-8434-44b5-b86c-15a9d11c4245\") " pod="openshift-authentication/oauth-openshift-558db77b4-n6d5q" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.157581 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b543d0c3-b775-4c87-bbd0-016e86361945-encryption-config\") pod \"apiserver-7bbb656c7d-cg7wn\" (UID: \"b543d0c3-b775-4c87-bbd0-016e86361945\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cg7wn" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.157620 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-hf42t"] Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.157620 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.159098 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hf42t" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.164229 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401110-g625d"] Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.165087 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ld2mj"] Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.165180 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/537ea37d-925a-4ba7-95de-307e69630afb-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-d8jnq\" (UID: \"537ea37d-925a-4ba7-95de-307e69630afb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-d8jnq" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.165211 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8xspn"] Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.165994 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8xspn" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.166397 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401110-g625d" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.166624 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b543d0c3-b775-4c87-bbd0-016e86361945-serving-cert\") pod \"apiserver-7bbb656c7d-cg7wn\" (UID: \"b543d0c3-b775-4c87-bbd0-016e86361945\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cg7wn" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.167340 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.168600 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-482dq"] Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.169761 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-d2ltx"] Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.171731 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-hkxnn"] Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.171968 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-wsrtz"] Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.173826 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wctdv"] Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.174818 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7s8tp"] Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.176225 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-456g2"] Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.177198 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-machine-api/machine-api-operator-5694c8668f-48zrm"] Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.178397 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-vd4gc"] Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.179714 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-shxrh"] Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.183363 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-xlrc4"] Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.184612 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g9f6m"] Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.185633 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vn7cb"] Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.186741 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mpgj4"] Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.187616 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.187867 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2d8r7"] Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.188887 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-dsd6j"] Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.190040 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-lsg8f"] Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.192300 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401110-g625d"] Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.192339 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-kht7r"] Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.193344 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8xspn"] Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.194416 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-rpfp2"] Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.195406 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hdv9w"] Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.196483 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7hmqn"] Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.197544 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nklcx"] Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.198745 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-97g57"] Nov 25 10:34:03 crc 
kubenswrapper[4813]: I1125 10:34:03.199770 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-97g57" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.200132 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-q2vkk"] Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.202493 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-hf42t"] Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.202523 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-mbr49"] Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.202629 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-q2vkk" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.203282 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-q2vkk"] Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.204596 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-tfjp7"] Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.206165 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-fnjf2"] Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.207749 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-fnjf2"] Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.207886 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-fnjf2" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.208361 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.227218 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.252018 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.254188 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5cc0ff08-77b5-4ca3-bded-0dd386a5009d-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-2d8r7\" (UID: \"5cc0ff08-77b5-4ca3-bded-0dd386a5009d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2d8r7" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.254266 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/616a1226-9627-43a9-a1a7-5dfb4cf863d8-config\") pod \"machine-api-operator-5694c8668f-48zrm\" (UID: \"616a1226-9627-43a9-a1a7-5dfb4cf863d8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-48zrm" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.254298 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/083d68a5-93e4-4dbd-9ba0-e4e7d30da8f7-metrics-tls\") pod \"ingress-operator-5b745b69d9-kht7r\" (UID: \"083d68a5-93e4-4dbd-9ba0-e4e7d30da8f7\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kht7r" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.254362 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1ef5c9d6-3f23-49c2-87e0-5c6d76ae0aa6-metrics-tls\") pod \"dns-operator-744455d44c-dsd6j\" (UID: \"1ef5c9d6-3f23-49c2-87e0-5c6d76ae0aa6\") " pod="openshift-dns-operator/dns-operator-744455d44c-dsd6j" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.254447 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0120e24c-5159-481f-a3d3-e802a58be557-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-w7ltb\" (UID: \"0120e24c-5159-481f-a3d3-e802a58be557\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-w7ltb" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.254468 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0120e24c-5159-481f-a3d3-e802a58be557-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-w7ltb\" (UID: \"0120e24c-5159-481f-a3d3-e802a58be557\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-w7ltb" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.254506 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gwbx\" (UniqueName: \"kubernetes.io/projected/083d68a5-93e4-4dbd-9ba0-e4e7d30da8f7-kube-api-access-2gwbx\") pod \"ingress-operator-5b745b69d9-kht7r\" (UID: \"083d68a5-93e4-4dbd-9ba0-e4e7d30da8f7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kht7r" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.254559 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8e74616c-72c8-41c2-901e-272c15e94ee7-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-7hmqn\" (UID: \"8e74616c-72c8-41c2-901e-272c15e94ee7\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7hmqn" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.255794 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqvd7\" (UniqueName: \"kubernetes.io/projected/8e74616c-72c8-41c2-901e-272c15e94ee7-kube-api-access-qqvd7\") pod \"cluster-image-registry-operator-dc59b4c8b-7hmqn\" (UID: \"8e74616c-72c8-41c2-901e-272c15e94ee7\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7hmqn" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.255860 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/083d68a5-93e4-4dbd-9ba0-e4e7d30da8f7-bound-sa-token\") pod \"ingress-operator-5b745b69d9-kht7r\" (UID: \"083d68a5-93e4-4dbd-9ba0-e4e7d30da8f7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kht7r" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.255892 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8e74616c-72c8-41c2-901e-272c15e94ee7-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-7hmqn\" (UID: \"8e74616c-72c8-41c2-901e-272c15e94ee7\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7hmqn" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.255925 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8e74616c-72c8-41c2-901e-272c15e94ee7-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-7hmqn\" (UID: \"8e74616c-72c8-41c2-901e-272c15e94ee7\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7hmqn" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.255969 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9b7m\" (UniqueName: \"kubernetes.io/projected/52c2799b-2750-4c0e-8a0b-b1112a7c25f1-kube-api-access-q9b7m\") pod \"openshift-controller-manager-operator-756b6f6bc6-ld2mj\" (UID: \"52c2799b-2750-4c0e-8a0b-b1112a7c25f1\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ld2mj" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.255995 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5x8fl\" (UniqueName: \"kubernetes.io/projected/616a1226-9627-43a9-a1a7-5dfb4cf863d8-kube-api-access-5x8fl\") pod \"machine-api-operator-5694c8668f-48zrm\" (UID: \"616a1226-9627-43a9-a1a7-5dfb4cf863d8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-48zrm" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.256022 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4221bb1b-a98a-4ddf-b8cf-21a2db2e2b72-etcd-client\") pod \"etcd-operator-b45778765-hkxnn\" (UID: \"4221bb1b-a98a-4ddf-b8cf-21a2db2e2b72\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hkxnn" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.256053 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jg6fs\" (UniqueName: \"kubernetes.io/projected/54ad0590-7880-4467-b980-334b0ea3807c-kube-api-access-jg6fs\") pod \"openshift-config-operator-7777fb866f-frcz9\" (UID: \"54ad0590-7880-4467-b980-334b0ea3807c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-frcz9" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.256090 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/54ad0590-7880-4467-b980-334b0ea3807c-available-featuregates\") pod \"openshift-config-operator-7777fb866f-frcz9\" (UID: \"54ad0590-7880-4467-b980-334b0ea3807c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-frcz9" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.256156 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/616a1226-9627-43a9-a1a7-5dfb4cf863d8-images\") pod \"machine-api-operator-5694c8668f-48zrm\" (UID: \"616a1226-9627-43a9-a1a7-5dfb4cf863d8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-48zrm" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.256226 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b734673f-c958-487f-8871-cf40f8fe8e0b-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-wsrtz\" (UID: \"b734673f-c958-487f-8871-cf40f8fe8e0b\") " 
pod="openshift-multus/multus-admission-controller-857f4d67dd-wsrtz" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.256256 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/4221bb1b-a98a-4ddf-b8cf-21a2db2e2b72-etcd-service-ca\") pod \"etcd-operator-b45778765-hkxnn\" (UID: \"4221bb1b-a98a-4ddf-b8cf-21a2db2e2b72\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hkxnn" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.256281 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54ad0590-7880-4467-b980-334b0ea3807c-serving-cert\") pod \"openshift-config-operator-7777fb866f-frcz9\" (UID: \"54ad0590-7880-4467-b980-334b0ea3807c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-frcz9" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.256308 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86jf4\" (UniqueName: \"kubernetes.io/projected/1ef5c9d6-3f23-49c2-87e0-5c6d76ae0aa6-kube-api-access-86jf4\") pod \"dns-operator-744455d44c-dsd6j\" (UID: \"1ef5c9d6-3f23-49c2-87e0-5c6d76ae0aa6\") " pod="openshift-dns-operator/dns-operator-744455d44c-dsd6j" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.256370 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/083d68a5-93e4-4dbd-9ba0-e4e7d30da8f7-trusted-ca\") pod \"ingress-operator-5b745b69d9-kht7r\" (UID: \"083d68a5-93e4-4dbd-9ba0-e4e7d30da8f7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kht7r" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.256405 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0120e24c-5159-481f-a3d3-e802a58be557-config\") pod \"kube-controller-manager-operator-78b949d7b-w7ltb\" (UID: \"0120e24c-5159-481f-a3d3-e802a58be557\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-w7ltb" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.256464 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/616a1226-9627-43a9-a1a7-5dfb4cf863d8-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-48zrm\" (UID: \"616a1226-9627-43a9-a1a7-5dfb4cf863d8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-48zrm" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.256523 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52c2799b-2750-4c0e-8a0b-b1112a7c25f1-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-ld2mj\" (UID: \"52c2799b-2750-4c0e-8a0b-b1112a7c25f1\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ld2mj" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.256555 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/4221bb1b-a98a-4ddf-b8cf-21a2db2e2b72-etcd-ca\") pod \"etcd-operator-b45778765-hkxnn\" (UID: \"4221bb1b-a98a-4ddf-b8cf-21a2db2e2b72\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hkxnn" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.256612 4813 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4221bb1b-a98a-4ddf-b8cf-21a2db2e2b72-config\") pod \"etcd-operator-b45778765-hkxnn\" (UID: \"4221bb1b-a98a-4ddf-b8cf-21a2db2e2b72\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hkxnn" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.256643 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kc4sf\" (UniqueName: \"kubernetes.io/projected/a4fc4e54-61da-43ab-934e-5f7ed6178ab6-kube-api-access-kc4sf\") pod \"downloads-7954f5f757-482dq\" (UID: \"a4fc4e54-61da-43ab-934e-5f7ed6178ab6\") " pod="openshift-console/downloads-7954f5f757-482dq" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.256767 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5cc0ff08-77b5-4ca3-bded-0dd386a5009d-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-2d8r7\" (UID: \"5cc0ff08-77b5-4ca3-bded-0dd386a5009d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2d8r7" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.256828 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5cc0ff08-77b5-4ca3-bded-0dd386a5009d-config\") pod \"kube-apiserver-operator-766d6c64bb-2d8r7\" (UID: \"5cc0ff08-77b5-4ca3-bded-0dd386a5009d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2d8r7" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.256861 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4221bb1b-a98a-4ddf-b8cf-21a2db2e2b72-serving-cert\") pod \"etcd-operator-b45778765-hkxnn\" (UID: \"4221bb1b-a98a-4ddf-b8cf-21a2db2e2b72\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hkxnn" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.256908 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfsrh\" (UniqueName: \"kubernetes.io/projected/4221bb1b-a98a-4ddf-b8cf-21a2db2e2b72-kube-api-access-rfsrh\") pod \"etcd-operator-b45778765-hkxnn\" (UID: \"4221bb1b-a98a-4ddf-b8cf-21a2db2e2b72\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hkxnn" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.256942 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrwgm\" (UniqueName: \"kubernetes.io/projected/b734673f-c958-487f-8871-cf40f8fe8e0b-kube-api-access-wrwgm\") pod \"multus-admission-controller-857f4d67dd-wsrtz\" (UID: \"b734673f-c958-487f-8871-cf40f8fe8e0b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-wsrtz" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.256946 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/54ad0590-7880-4467-b980-334b0ea3807c-available-featuregates\") pod \"openshift-config-operator-7777fb866f-frcz9\" (UID: \"54ad0590-7880-4467-b980-334b0ea3807c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-frcz9" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.257002 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52c2799b-2750-4c0e-8a0b-b1112a7c25f1-serving-cert\") pod 
\"openshift-controller-manager-operator-756b6f6bc6-ld2mj\" (UID: \"52c2799b-2750-4c0e-8a0b-b1112a7c25f1\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ld2mj" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.257018 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8e74616c-72c8-41c2-901e-272c15e94ee7-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-7hmqn\" (UID: \"8e74616c-72c8-41c2-901e-272c15e94ee7\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7hmqn" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.255824 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/616a1226-9627-43a9-a1a7-5dfb4cf863d8-config\") pod \"machine-api-operator-5694c8668f-48zrm\" (UID: \"616a1226-9627-43a9-a1a7-5dfb4cf863d8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-48zrm" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.258543 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/616a1226-9627-43a9-a1a7-5dfb4cf863d8-images\") pod \"machine-api-operator-5694c8668f-48zrm\" (UID: \"616a1226-9627-43a9-a1a7-5dfb4cf863d8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-48zrm" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.259285 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52c2799b-2750-4c0e-8a0b-b1112a7c25f1-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-ld2mj\" (UID: \"52c2799b-2750-4c0e-8a0b-b1112a7c25f1\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ld2mj" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.259482 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4221bb1b-a98a-4ddf-b8cf-21a2db2e2b72-config\") pod \"etcd-operator-b45778765-hkxnn\" (UID: \"4221bb1b-a98a-4ddf-b8cf-21a2db2e2b72\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hkxnn" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.259841 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/4221bb1b-a98a-4ddf-b8cf-21a2db2e2b72-etcd-ca\") pod \"etcd-operator-b45778765-hkxnn\" (UID: \"4221bb1b-a98a-4ddf-b8cf-21a2db2e2b72\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hkxnn" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.260013 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/4221bb1b-a98a-4ddf-b8cf-21a2db2e2b72-etcd-service-ca\") pod \"etcd-operator-b45778765-hkxnn\" (UID: \"4221bb1b-a98a-4ddf-b8cf-21a2db2e2b72\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hkxnn" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.260592 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1ef5c9d6-3f23-49c2-87e0-5c6d76ae0aa6-metrics-tls\") pod \"dns-operator-744455d44c-dsd6j\" (UID: \"1ef5c9d6-3f23-49c2-87e0-5c6d76ae0aa6\") " pod="openshift-dns-operator/dns-operator-744455d44c-dsd6j" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.260978 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8e74616c-72c8-41c2-901e-272c15e94ee7-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-7hmqn\" (UID: \"8e74616c-72c8-41c2-901e-272c15e94ee7\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7hmqn" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.262428 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52c2799b-2750-4c0e-8a0b-b1112a7c25f1-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-ld2mj\" (UID: \"52c2799b-2750-4c0e-8a0b-b1112a7c25f1\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ld2mj" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.262525 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54ad0590-7880-4467-b980-334b0ea3807c-serving-cert\") pod \"openshift-config-operator-7777fb866f-frcz9\" (UID: \"54ad0590-7880-4467-b980-334b0ea3807c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-frcz9" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.264075 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/616a1226-9627-43a9-a1a7-5dfb4cf863d8-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-48zrm\" (UID: \"616a1226-9627-43a9-a1a7-5dfb4cf863d8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-48zrm" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.264231 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4221bb1b-a98a-4ddf-b8cf-21a2db2e2b72-serving-cert\") pod \"etcd-operator-b45778765-hkxnn\" (UID: \"4221bb1b-a98a-4ddf-b8cf-21a2db2e2b72\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hkxnn" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.264429 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4221bb1b-a98a-4ddf-b8cf-21a2db2e2b72-etcd-client\") pod \"etcd-operator-b45778765-hkxnn\" (UID: \"4221bb1b-a98a-4ddf-b8cf-21a2db2e2b72\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hkxnn" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.267482 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.287416 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.307046 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.328246 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.340067 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0120e24c-5159-481f-a3d3-e802a58be557-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-w7ltb\" (UID: 
\"0120e24c-5159-481f-a3d3-e802a58be557\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-w7ltb" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.350491 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.359259 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0120e24c-5159-481f-a3d3-e802a58be557-config\") pod \"kube-controller-manager-operator-78b949d7b-w7ltb\" (UID: \"0120e24c-5159-481f-a3d3-e802a58be557\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-w7ltb" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.367570 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.372815 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b734673f-c958-487f-8871-cf40f8fe8e0b-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-wsrtz\" (UID: \"b734673f-c958-487f-8871-cf40f8fe8e0b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-wsrtz" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.386882 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.410038 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.427730 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.446891 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.467498 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.488302 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.509538 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.528096 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.541998 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5cc0ff08-77b5-4ca3-bded-0dd386a5009d-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-2d8r7\" (UID: \"5cc0ff08-77b5-4ca3-bded-0dd386a5009d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2d8r7" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.548398 4813 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.549725 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5cc0ff08-77b5-4ca3-bded-0dd386a5009d-config\") pod \"kube-apiserver-operator-766d6c64bb-2d8r7\" (UID: \"5cc0ff08-77b5-4ca3-bded-0dd386a5009d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2d8r7" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.568491 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.595801 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.602100 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/083d68a5-93e4-4dbd-9ba0-e4e7d30da8f7-trusted-ca\") pod \"ingress-operator-5b745b69d9-kht7r\" (UID: \"083d68a5-93e4-4dbd-9ba0-e4e7d30da8f7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kht7r" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.608324 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.628195 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.647348 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.661108 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/083d68a5-93e4-4dbd-9ba0-e4e7d30da8f7-metrics-tls\") pod \"ingress-operator-5b745b69d9-kht7r\" (UID: \"083d68a5-93e4-4dbd-9ba0-e4e7d30da8f7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kht7r" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.687525 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.708035 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.726587 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.748419 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.768264 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.788424 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.808028 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.828810 4813 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.847701 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.871864 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.888277 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.908555 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.927102 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.948030 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.967979 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Nov 25 10:34:03 crc kubenswrapper[4813]: I1125 10:34:03.988426 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Nov 25 10:34:04 crc kubenswrapper[4813]: I1125 10:34:04.008711 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Nov 25 10:34:04 crc kubenswrapper[4813]: I1125 10:34:04.027832 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Nov 25 10:34:04 crc kubenswrapper[4813]: I1125 10:34:04.046620 4813 request.go:700] Waited for 1.005460487s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/configmaps?fieldSelector=metadata.name%3Dservice-ca-operator-config&limit=500&resourceVersion=0 Nov 25 10:34:04 crc kubenswrapper[4813]: I1125 10:34:04.048736 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Nov 25 10:34:04 crc kubenswrapper[4813]: I1125 10:34:04.075531 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Nov 25 10:34:04 crc kubenswrapper[4813]: I1125 10:34:04.088049 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Nov 25 10:34:04 crc kubenswrapper[4813]: I1125 10:34:04.109719 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Nov 25 10:34:04 crc kubenswrapper[4813]: I1125 10:34:04.128911 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Nov 25 10:34:04 crc kubenswrapper[4813]: I1125 10:34:04.147783 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Nov 25 10:34:04 crc kubenswrapper[4813]: I1125 
10:34:04.168269 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Nov 25 10:34:04 crc kubenswrapper[4813]: I1125 10:34:04.207887 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Nov 25 10:34:04 crc kubenswrapper[4813]: I1125 10:34:04.228173 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Nov 25 10:34:04 crc kubenswrapper[4813]: I1125 10:34:04.248008 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Nov 25 10:34:04 crc kubenswrapper[4813]: I1125 10:34:04.267357 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Nov 25 10:34:04 crc kubenswrapper[4813]: I1125 10:34:04.288179 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Nov 25 10:34:04 crc kubenswrapper[4813]: I1125 10:34:04.307111 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Nov 25 10:34:04 crc kubenswrapper[4813]: I1125 10:34:04.327262 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Nov 25 10:34:04 crc kubenswrapper[4813]: I1125 10:34:04.347044 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Nov 25 10:34:04 crc kubenswrapper[4813]: I1125 10:34:04.367368 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Nov 25 10:34:04 crc kubenswrapper[4813]: I1125 10:34:04.386595 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Nov 25 10:34:04 crc kubenswrapper[4813]: I1125 10:34:04.407246 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Nov 25 10:34:04 crc kubenswrapper[4813]: I1125 10:34:04.428092 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Nov 25 10:34:04 crc kubenswrapper[4813]: I1125 10:34:04.448315 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Nov 25 10:34:04 crc kubenswrapper[4813]: I1125 10:34:04.469186 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Nov 25 10:34:04 crc kubenswrapper[4813]: I1125 10:34:04.487289 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Nov 25 10:34:04 crc kubenswrapper[4813]: I1125 10:34:04.507387 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Nov 25 10:34:04 crc kubenswrapper[4813]: I1125 10:34:04.547993 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzvbw\" (UniqueName: \"kubernetes.io/projected/40d84038-a98d-46f7-90b9-b65d9eb09937-kube-api-access-dzvbw\") pod \"openshift-apiserver-operator-796bbdcf4f-456g2\" (UID: \"40d84038-a98d-46f7-90b9-b65d9eb09937\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-456g2" Nov 25 10:34:04 crc kubenswrapper[4813]: I1125 10:34:04.562202 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2t5q\" (UniqueName: \"kubernetes.io/projected/f94406f9-8434-44b5-b86c-15a9d11c4245-kube-api-access-s2t5q\") pod \"oauth-openshift-558db77b4-n6d5q\" (UID: \"f94406f9-8434-44b5-b86c-15a9d11c4245\") " pod="openshift-authentication/oauth-openshift-558db77b4-n6d5q" Nov 25 10:34:04 crc kubenswrapper[4813]: I1125 10:34:04.583495 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7xvj\" (UniqueName: \"kubernetes.io/projected/8804f49f-9764-4368-ab35-dcf4dadfb223-kube-api-access-t7xvj\") pod \"cluster-samples-operator-665b6dd947-7b2wt\" (UID: \"8804f49f-9764-4368-ab35-dcf4dadfb223\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7b2wt" Nov 25 10:34:04 crc kubenswrapper[4813]: I1125 10:34:04.603762 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7cdv\" (UniqueName: \"kubernetes.io/projected/b543d0c3-b775-4c87-bbd0-016e86361945-kube-api-access-x7cdv\") pod \"apiserver-7bbb656c7d-cg7wn\" (UID: \"b543d0c3-b775-4c87-bbd0-016e86361945\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cg7wn" Nov 25 10:34:04 crc kubenswrapper[4813]: I1125 10:34:04.622206 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6l5qp\" (UniqueName: \"kubernetes.io/projected/537ea37d-925a-4ba7-95de-307e69630afb-kube-api-access-6l5qp\") pod \"authentication-operator-69f744f599-d8jnq\" (UID: \"537ea37d-925a-4ba7-95de-307e69630afb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-d8jnq" Nov 25 10:34:04 crc kubenswrapper[4813]: I1125 10:34:04.643144 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsznf\" (UniqueName: \"kubernetes.io/projected/09baf5a6-68d3-4173-ba92-46e36fab8a2e-kube-api-access-hsznf\") pod \"controller-manager-879f6c89f-vd4gc\" (UID: \"09baf5a6-68d3-4173-ba92-46e36fab8a2e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vd4gc" Nov 25 10:34:04 crc kubenswrapper[4813]: I1125 10:34:04.647121 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cg7wn" Nov 25 10:34:04 crc kubenswrapper[4813]: I1125 10:34:04.661664 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49lwb\" (UniqueName: \"kubernetes.io/projected/609ff7ea-0071-4b93-af38-87f1d04aa886-kube-api-access-49lwb\") pod \"console-operator-58897d9998-wfr92\" (UID: \"609ff7ea-0071-4b93-af38-87f1d04aa886\") " pod="openshift-console-operator/console-operator-58897d9998-wfr92" Nov 25 10:34:04 crc kubenswrapper[4813]: I1125 10:34:04.683487 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqqsj\" (UniqueName: \"kubernetes.io/projected/dd388810-9d8b-4057-942d-7249cf14d38f-kube-api-access-nqqsj\") pod \"machine-approver-56656f9798-87pc9\" (UID: \"dd388810-9d8b-4057-942d-7249cf14d38f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-87pc9" Nov 25 10:34:04 crc kubenswrapper[4813]: I1125 10:34:04.702150 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqxfz\" (UniqueName: \"kubernetes.io/projected/65730459-3e56-4cd2-97f4-4e47f60c32c6-kube-api-access-nqxfz\") pod \"route-controller-manager-6576b87f9c-ckjsl\" (UID: \"65730459-3e56-4cd2-97f4-4e47f60c32c6\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ckjsl" Nov 25 10:34:04 crc kubenswrapper[4813]: I1125 10:34:04.702389 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-87pc9" Nov 25 10:34:04 crc kubenswrapper[4813]: I1125 10:34:04.712267 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-d8jnq" Nov 25 10:34:04 crc kubenswrapper[4813]: I1125 10:34:04.727752 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Nov 25 10:34:04 crc kubenswrapper[4813]: I1125 10:34:04.728458 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-456g2" Nov 25 10:34:04 crc kubenswrapper[4813]: I1125 10:34:04.730315 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lt2w\" (UniqueName: \"kubernetes.io/projected/7a70dbef-bca6-47b6-8814-424cc0cbf441-kube-api-access-9lt2w\") pod \"apiserver-76f77b778f-5ngzq\" (UID: \"7a70dbef-bca6-47b6-8814-424cc0cbf441\") " pod="openshift-apiserver/apiserver-76f77b778f-5ngzq" Nov 25 10:34:04 crc kubenswrapper[4813]: I1125 10:34:04.742594 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-vd4gc" Nov 25 10:34:04 crc kubenswrapper[4813]: I1125 10:34:04.748545 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Nov 25 10:34:04 crc kubenswrapper[4813]: I1125 10:34:04.767576 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Nov 25 10:34:04 crc kubenswrapper[4813]: I1125 10:34:04.787260 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Nov 25 10:34:04 crc kubenswrapper[4813]: I1125 10:34:04.808293 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Nov 25 10:34:04 crc kubenswrapper[4813]: I1125 10:34:04.822059 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-wfr92" Nov 25 10:34:04 crc kubenswrapper[4813]: I1125 10:34:04.822295 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-n6d5q" Nov 25 10:34:04 crc kubenswrapper[4813]: I1125 10:34:04.830004 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7b2wt" Nov 25 10:34:04 crc kubenswrapper[4813]: I1125 10:34:04.830316 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Nov 25 10:34:04 crc kubenswrapper[4813]: I1125 10:34:04.848146 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Nov 25 10:34:04 crc kubenswrapper[4813]: I1125 10:34:04.868199 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 25 10:34:04 crc kubenswrapper[4813]: I1125 10:34:04.894256 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 25 10:34:04 crc kubenswrapper[4813]: I1125 10:34:04.909912 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Nov 25 10:34:04 crc kubenswrapper[4813]: I1125 10:34:04.911261 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-cg7wn"] Nov 25 10:34:04 crc kubenswrapper[4813]: I1125 10:34:04.927418 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Nov 25 10:34:04 crc kubenswrapper[4813]: I1125 10:34:04.943444 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-d8jnq"] Nov 25 10:34:04 crc kubenswrapper[4813]: I1125 10:34:04.947242 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Nov 25 10:34:04 crc kubenswrapper[4813]: I1125 10:34:04.965873 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-5ngzq" Nov 25 10:34:04 crc kubenswrapper[4813]: I1125 10:34:04.968013 4813 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Nov 25 10:34:04 crc kubenswrapper[4813]: I1125 10:34:04.983511 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ckjsl" Nov 25 10:34:04 crc kubenswrapper[4813]: I1125 10:34:04.990196 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.006824 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.019002 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-456g2"] Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.026999 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.042217 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-vd4gc"] Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.046913 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.065725 4813 request.go:700] Waited for 1.857480529s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/secrets?fieldSelector=metadata.name%3Dcanary-serving-cert&limit=500&resourceVersion=0 Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.067631 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.087009 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.131295 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5cc0ff08-77b5-4ca3-bded-0dd386a5009d-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-2d8r7\" (UID: \"5cc0ff08-77b5-4ca3-bded-0dd386a5009d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2d8r7" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.148252 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0120e24c-5159-481f-a3d3-e802a58be557-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-w7ltb\" (UID: \"0120e24c-5159-481f-a3d3-e802a58be557\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-w7ltb" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.181857 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqvd7\" (UniqueName: \"kubernetes.io/projected/8e74616c-72c8-41c2-901e-272c15e94ee7-kube-api-access-qqvd7\") pod \"cluster-image-registry-operator-dc59b4c8b-7hmqn\" (UID: 
\"8e74616c-72c8-41c2-901e-272c15e94ee7\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7hmqn" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.195452 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-n6d5q"] Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.196647 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/083d68a5-93e4-4dbd-9ba0-e4e7d30da8f7-bound-sa-token\") pod \"ingress-operator-5b745b69d9-kht7r\" (UID: \"083d68a5-93e4-4dbd-9ba0-e4e7d30da8f7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kht7r" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.206938 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gwbx\" (UniqueName: \"kubernetes.io/projected/083d68a5-93e4-4dbd-9ba0-e4e7d30da8f7-kube-api-access-2gwbx\") pod \"ingress-operator-5b745b69d9-kht7r\" (UID: \"083d68a5-93e4-4dbd-9ba0-e4e7d30da8f7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kht7r" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.232487 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9b7m\" (UniqueName: \"kubernetes.io/projected/52c2799b-2750-4c0e-8a0b-b1112a7c25f1-kube-api-access-q9b7m\") pod \"openshift-controller-manager-operator-756b6f6bc6-ld2mj\" (UID: \"52c2799b-2750-4c0e-8a0b-b1112a7c25f1\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ld2mj" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.247946 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kc4sf\" (UniqueName: \"kubernetes.io/projected/a4fc4e54-61da-43ab-934e-5f7ed6178ab6-kube-api-access-kc4sf\") pod \"downloads-7954f5f757-482dq\" (UID: \"a4fc4e54-61da-43ab-934e-5f7ed6178ab6\") " pod="openshift-console/downloads-7954f5f757-482dq" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.251704 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-87pc9" event={"ID":"dd388810-9d8b-4057-942d-7249cf14d38f","Type":"ContainerStarted","Data":"a974c21d3dc14f92d650264a90ed94b5c6023549d2ffc9bf048960311965f61e"} Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.251752 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-87pc9" event={"ID":"dd388810-9d8b-4057-942d-7249cf14d38f","Type":"ContainerStarted","Data":"f862bb8432c5a1616f6551531f454153a7896858633daa04c09d4d4c8e0f94e9"} Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.252022 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-ckjsl"] Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.256809 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cg7wn" event={"ID":"b543d0c3-b775-4c87-bbd0-016e86361945","Type":"ContainerStarted","Data":"98e5c6044b5d9ae6aa056bac29c524377f135dfdd87453046c984ef40ec422f6"} Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.257636 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-456g2" 
event={"ID":"40d84038-a98d-46f7-90b9-b65d9eb09937","Type":"ContainerStarted","Data":"ac202451a78f95773516ceee2d333a5fd876ac89debedc12ea80c0128975c79d"} Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.258959 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-vd4gc" event={"ID":"09baf5a6-68d3-4173-ba92-46e36fab8a2e","Type":"ContainerStarted","Data":"4efe3c696982dfd928adee122138684909869908e310d502cda10e99fc8f7752"} Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.262322 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-d8jnq" event={"ID":"537ea37d-925a-4ba7-95de-307e69630afb","Type":"ContainerStarted","Data":"42a84321ed5e4396c5449d1483c668e19a6e85437fb73f5d3a2a44731144fc22"} Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.268175 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jg6fs\" (UniqueName: \"kubernetes.io/projected/54ad0590-7880-4467-b980-334b0ea3807c-kube-api-access-jg6fs\") pod \"openshift-config-operator-7777fb866f-frcz9\" (UID: \"54ad0590-7880-4467-b980-334b0ea3807c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-frcz9" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.289835 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8e74616c-72c8-41c2-901e-272c15e94ee7-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-7hmqn\" (UID: \"8e74616c-72c8-41c2-901e-272c15e94ee7\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7hmqn" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.307564 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5x8fl\" (UniqueName: \"kubernetes.io/projected/616a1226-9627-43a9-a1a7-5dfb4cf863d8-kube-api-access-5x8fl\") pod \"machine-api-operator-5694c8668f-48zrm\" (UID: \"616a1226-9627-43a9-a1a7-5dfb4cf863d8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-48zrm" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.308993 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7b2wt"] Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.313633 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-wfr92"] Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.316897 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-w7ltb" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.325014 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrwgm\" (UniqueName: \"kubernetes.io/projected/b734673f-c958-487f-8871-cf40f8fe8e0b-kube-api-access-wrwgm\") pod \"multus-admission-controller-857f4d67dd-wsrtz\" (UID: \"b734673f-c958-487f-8871-cf40f8fe8e0b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-wsrtz" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.339971 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2d8r7" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.340471 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86jf4\" (UniqueName: \"kubernetes.io/projected/1ef5c9d6-3f23-49c2-87e0-5c6d76ae0aa6-kube-api-access-86jf4\") pod \"dns-operator-744455d44c-dsd6j\" (UID: \"1ef5c9d6-3f23-49c2-87e0-5c6d76ae0aa6\") " pod="openshift-dns-operator/dns-operator-744455d44c-dsd6j" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.341794 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-5ngzq"] Nov 25 10:34:05 crc kubenswrapper[4813]: W1125 10:34:05.343815 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod609ff7ea_0071_4b93_af38_87f1d04aa886.slice/crio-cf44628b0996fbfd667abe5f9806625fe08f0fa176d49c0c0ad4a84dcd3cad34 WatchSource:0}: Error finding container cf44628b0996fbfd667abe5f9806625fe08f0fa176d49c0c0ad4a84dcd3cad34: Status 404 returned error can't find the container with id cf44628b0996fbfd667abe5f9806625fe08f0fa176d49c0c0ad4a84dcd3cad34 Nov 25 10:34:05 crc kubenswrapper[4813]: W1125 10:34:05.344850 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod65730459_3e56_4cd2_97f4_4e47f60c32c6.slice/crio-d932233fa4edbaba7df745c68809a686100ff75122667baef012452360ed8c19 WatchSource:0}: Error finding container d932233fa4edbaba7df745c68809a686100ff75122667baef012452360ed8c19: Status 404 returned error can't find the container with id d932233fa4edbaba7df745c68809a686100ff75122667baef012452360ed8c19 Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.355445 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kht7r" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.363957 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfsrh\" (UniqueName: \"kubernetes.io/projected/4221bb1b-a98a-4ddf-b8cf-21a2db2e2b72-kube-api-access-rfsrh\") pod \"etcd-operator-b45778765-hkxnn\" (UID: \"4221bb1b-a98a-4ddf-b8cf-21a2db2e2b72\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hkxnn" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.413469 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/89fdb811-5cae-4ece-a672-207a7af34036-registry-tls\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.413759 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cvt2\" (UniqueName: \"kubernetes.io/projected/89fdb811-5cae-4ece-a672-207a7af34036-kube-api-access-6cvt2\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.413889 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/89fdb811-5cae-4ece-a672-207a7af34036-registry-certificates\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.413996 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/89fdb811-5cae-4ece-a672-207a7af34036-installation-pull-secrets\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.414107 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/89fdb811-5cae-4ece-a672-207a7af34036-trusted-ca\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.414318 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.414491 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/89fdb811-5cae-4ece-a672-207a7af34036-bound-sa-token\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.414612 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/89fdb811-5cae-4ece-a672-207a7af34036-ca-trust-extracted\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:05 crc kubenswrapper[4813]: E1125 10:34:05.414856 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:05.91484436 +0000 UTC m=+143.044554246 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.434061 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-48zrm" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.464565 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ld2mj" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.472165 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-482dq" Nov 25 10:34:05 crc kubenswrapper[4813]: E1125 10:34:05.520669 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:06.020652699 +0000 UTC m=+143.150362585 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.520599 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.540842 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/670af6e7-f49a-40c1-9f2d-c3df905e9e44-config-volume\") pod \"collect-profiles-29401110-g625d\" (UID: \"670af6e7-f49a-40c1-9f2d-c3df905e9e44\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401110-g625d" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.540929 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b5580f94-06e3-4a91-b3e8-b1d7962438dd-metrics-certs\") pod \"router-default-5444994796-hvj2g\" (UID: \"b5580f94-06e3-4a91-b3e8-b1d7962438dd\") " pod="openshift-ingress/router-default-5444994796-hvj2g" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.541274 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/5189d915-46f9-4116-b03c-e672fc9a2195-profile-collector-cert\") pod \"olm-operator-6b444d44fb-shxrh\" (UID: \"5189d915-46f9-4116-b03c-e672fc9a2195\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-shxrh" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.541533 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/82507459-3471-4865-80e3-92a53d57f352-tmpfs\") pod \"packageserver-d55dfcdfc-8xspn\" (UID: \"82507459-3471-4865-80e3-92a53d57f352\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8xspn" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.541558 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdvwq\" (UniqueName: \"kubernetes.io/projected/8bce249f-7fe9-4bf4-abaa-7c8bc254b488-kube-api-access-qdvwq\") pod \"catalog-operator-68c6474976-mpgj4\" (UID: \"8bce249f-7fe9-4bf4-abaa-7c8bc254b488\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mpgj4" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.541577 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/61f4c501-c97d-4a5b-9105-1918dec567a8-csi-data-dir\") pod \"csi-hostpathplugin-q2vkk\" (UID: \"61f4c501-c97d-4a5b-9105-1918dec567a8\") " pod="hostpath-provisioner/csi-hostpathplugin-q2vkk" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.541610 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71172c07-4152-40e9-92ee-bee73fb6e3da-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-g9f6m\" (UID: \"71172c07-4152-40e9-92ee-bee73fb6e3da\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g9f6m" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.541737 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bn6jg\" (UniqueName: \"kubernetes.io/projected/302f6a62-c67c-48ef-97bc-9b53cdf5f67e-kube-api-access-bn6jg\") pod \"marketplace-operator-79b997595-7s8tp\" (UID: \"302f6a62-c67c-48ef-97bc-9b53cdf5f67e\") " pod="openshift-marketplace/marketplace-operator-79b997595-7s8tp" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.541794 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jw5z7\" (UniqueName: \"kubernetes.io/projected/dc9b8a2f-2bce-43c9-a8c5-1bf29d7d5964-kube-api-access-jw5z7\") pod \"control-plane-machine-set-operator-78cbb6b69f-vn7cb\" (UID: \"dc9b8a2f-2bce-43c9-a8c5-1bf29d7d5964\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vn7cb" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.541838 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/666f7543-9986-44d6-a7e4-dec723ea6a19-config-volume\") pod \"dns-default-lsg8f\" (UID: \"666f7543-9986-44d6-a7e4-dec723ea6a19\") " pod="openshift-dns/dns-default-lsg8f" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.541861 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/51935f3a-e932-43af-b272-01d8d88a1bf3-signing-key\") pod \"service-ca-9c57cc56f-mbr49\" (UID: \"51935f3a-e932-43af-b272-01d8d88a1bf3\") " pod="openshift-service-ca/service-ca-9c57cc56f-mbr49" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.542039 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-667pg\" (UniqueName: \"kubernetes.io/projected/82507459-3471-4865-80e3-92a53d57f352-kube-api-access-667pg\") pod \"packageserver-d55dfcdfc-8xspn\" (UID: \"82507459-3471-4865-80e3-92a53d57f352\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8xspn" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.542066 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/51935f3a-e932-43af-b272-01d8d88a1bf3-signing-cabundle\") pod \"service-ca-9c57cc56f-mbr49\" (UID: \"51935f3a-e932-43af-b272-01d8d88a1bf3\") " pod="openshift-service-ca/service-ca-9c57cc56f-mbr49" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.542597 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/89fdb811-5cae-4ece-a672-207a7af34036-registry-tls\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.542624 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/71172c07-4152-40e9-92ee-bee73fb6e3da-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-g9f6m\" (UID: \"71172c07-4152-40e9-92ee-bee73fb6e3da\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g9f6m" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.542642 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/17571cbf-de36-4b34-af0b-3db7493adaf4-oauth-serving-cert\") pod \"console-f9d7485db-rpfp2\" (UID: \"17571cbf-de36-4b34-af0b-3db7493adaf4\") " pod="openshift-console/console-f9d7485db-rpfp2" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.542705 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p68fn\" (UniqueName: \"kubernetes.io/projected/17571cbf-de36-4b34-af0b-3db7493adaf4-kube-api-access-p68fn\") pod \"console-f9d7485db-rpfp2\" (UID: \"17571cbf-de36-4b34-af0b-3db7493adaf4\") " pod="openshift-console/console-f9d7485db-rpfp2" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.542935 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/302f6a62-c67c-48ef-97bc-9b53cdf5f67e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-7s8tp\" (UID: \"302f6a62-c67c-48ef-97bc-9b53cdf5f67e\") " pod="openshift-marketplace/marketplace-operator-79b997595-7s8tp" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.543041 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/17571cbf-de36-4b34-af0b-3db7493adaf4-console-config\") pod \"console-f9d7485db-rpfp2\" (UID: \"17571cbf-de36-4b34-af0b-3db7493adaf4\") " pod="openshift-console/console-f9d7485db-rpfp2" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.543081 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/89fdb811-5cae-4ece-a672-207a7af34036-registry-certificates\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.543112 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4c4a4714-5f62-440f-ad51-bc55a08ad978-auth-proxy-config\") pod \"machine-config-operator-74547568cd-hf42t\" (UID: \"4c4a4714-5f62-440f-ad51-bc55a08ad978\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hf42t" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.543167 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/89fdb811-5cae-4ece-a672-207a7af34036-installation-pull-secrets\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.543209 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/89fdb811-5cae-4ece-a672-207a7af34036-trusted-ca\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.543237 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c96548a-a806-471d-9167-c5c58e8323b9-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-hdv9w\" (UID: \"9c96548a-a806-471d-9167-c5c58e8323b9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hdv9w" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.543265 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c96548a-a806-471d-9167-c5c58e8323b9-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-hdv9w\" (UID: \"9c96548a-a806-471d-9167-c5c58e8323b9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hdv9w" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.543286 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4c4a4714-5f62-440f-ad51-bc55a08ad978-images\") pod \"machine-config-operator-74547568cd-hf42t\" (UID: \"4c4a4714-5f62-440f-ad51-bc55a08ad978\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hf42t" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.543312 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gb57k\" (UniqueName: \"kubernetes.io/projected/4c4a4714-5f62-440f-ad51-bc55a08ad978-kube-api-access-gb57k\") pod \"machine-config-operator-74547568cd-hf42t\" (UID: \"4c4a4714-5f62-440f-ad51-bc55a08ad978\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hf42t" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.543338 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf38abdc-cb92-4ce5-b0c1-0d5c084aa359-serving-cert\") pod \"service-ca-operator-777779d784-d2ltx\" (UID: \"bf38abdc-cb92-4ce5-b0c1-0d5c084aa359\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-d2ltx" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.543366 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lw4n4\" (UniqueName: \"kubernetes.io/projected/b84b3a16-f833-40dc-8356-58bbc7aa3667-kube-api-access-lw4n4\") pod \"machine-config-controller-84d6567774-xlrc4\" (UID: \"b84b3a16-f833-40dc-8356-58bbc7aa3667\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xlrc4" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.543395 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/17571cbf-de36-4b34-af0b-3db7493adaf4-console-oauth-config\") pod \"console-f9d7485db-rpfp2\" (UID: \"17571cbf-de36-4b34-af0b-3db7493adaf4\") " pod="openshift-console/console-f9d7485db-rpfp2" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.543449 4813 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.543494 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtxcd\" (UniqueName: \"kubernetes.io/projected/666f7543-9986-44d6-a7e4-dec723ea6a19-kube-api-access-wtxcd\") pod \"dns-default-lsg8f\" (UID: \"666f7543-9986-44d6-a7e4-dec723ea6a19\") " pod="openshift-dns/dns-default-lsg8f" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.543579 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f88ebc7e-4e3d-4c3f-8f98-ca34b1cc76ad-cert\") pod \"ingress-canary-fnjf2\" (UID: \"f88ebc7e-4e3d-4c3f-8f98-ca34b1cc76ad\") " pod="openshift-ingress-canary/ingress-canary-fnjf2" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.543613 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fswr5\" (UniqueName: \"kubernetes.io/projected/670af6e7-f49a-40c1-9f2d-c3df905e9e44-kube-api-access-fswr5\") pod \"collect-profiles-29401110-g625d\" (UID: \"670af6e7-f49a-40c1-9f2d-c3df905e9e44\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401110-g625d" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.543639 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/dc9b8a2f-2bce-43c9-a8c5-1bf29d7d5964-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-vn7cb\" (UID: \"dc9b8a2f-2bce-43c9-a8c5-1bf29d7d5964\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vn7cb" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.543666 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/82507459-3471-4865-80e3-92a53d57f352-apiservice-cert\") pod \"packageserver-d55dfcdfc-8xspn\" (UID: \"82507459-3471-4865-80e3-92a53d57f352\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8xspn" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.543785 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/89fdb811-5cae-4ece-a672-207a7af34036-ca-trust-extracted\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.543816 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/17571cbf-de36-4b34-af0b-3db7493adaf4-service-ca\") pod \"console-f9d7485db-rpfp2\" (UID: \"17571cbf-de36-4b34-af0b-3db7493adaf4\") " pod="openshift-console/console-f9d7485db-rpfp2" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.543840 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5q6b9\" 
(UniqueName: \"kubernetes.io/projected/7a14e28a-ae0c-47e5-b762-cc2f4f191b83-kube-api-access-5q6b9\") pod \"migrator-59844c95c7-tfjp7\" (UID: \"7a14e28a-ae0c-47e5-b762-cc2f4f191b83\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-tfjp7" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.543934 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/302f6a62-c67c-48ef-97bc-9b53cdf5f67e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-7s8tp\" (UID: \"302f6a62-c67c-48ef-97bc-9b53cdf5f67e\") " pod="openshift-marketplace/marketplace-operator-79b997595-7s8tp" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.543957 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5h8j\" (UniqueName: \"kubernetes.io/projected/51935f3a-e932-43af-b272-01d8d88a1bf3-kube-api-access-b5h8j\") pod \"service-ca-9c57cc56f-mbr49\" (UID: \"51935f3a-e932-43af-b272-01d8d88a1bf3\") " pod="openshift-service-ca/service-ca-9c57cc56f-mbr49" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.543982 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cncvm\" (UniqueName: \"kubernetes.io/projected/3734de4b-3d99-4aa7-bb6e-3dc26e9b687e-kube-api-access-cncvm\") pod \"machine-config-server-97g57\" (UID: \"3734de4b-3d99-4aa7-bb6e-3dc26e9b687e\") " pod="openshift-machine-config-operator/machine-config-server-97g57" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.544016 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/3734de4b-3d99-4aa7-bb6e-3dc26e9b687e-certs\") pod \"machine-config-server-97g57\" (UID: \"3734de4b-3d99-4aa7-bb6e-3dc26e9b687e\") " pod="openshift-machine-config-operator/machine-config-server-97g57" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.544040 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b5580f94-06e3-4a91-b3e8-b1d7962438dd-service-ca-bundle\") pod \"router-default-5444994796-hvj2g\" (UID: \"b5580f94-06e3-4a91-b3e8-b1d7962438dd\") " pod="openshift-ingress/router-default-5444994796-hvj2g" Nov 25 10:34:05 crc kubenswrapper[4813]: E1125 10:34:05.544058 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:06.04404586 +0000 UTC m=+143.173755746 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.544111 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/cb5d2b7c-15cb-4214-bd8c-0f9d144567f7-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-nklcx\" (UID: \"cb5d2b7c-15cb-4214-bd8c-0f9d144567f7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nklcx" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.544131 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2w24g\" (UniqueName: \"kubernetes.io/projected/f88ebc7e-4e3d-4c3f-8f98-ca34b1cc76ad-kube-api-access-2w24g\") pod \"ingress-canary-fnjf2\" (UID: \"f88ebc7e-4e3d-4c3f-8f98-ca34b1cc76ad\") " pod="openshift-ingress-canary/ingress-canary-fnjf2" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.544166 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/17571cbf-de36-4b34-af0b-3db7493adaf4-console-serving-cert\") pod \"console-f9d7485db-rpfp2\" (UID: \"17571cbf-de36-4b34-af0b-3db7493adaf4\") " pod="openshift-console/console-f9d7485db-rpfp2" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.544195 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/3734de4b-3d99-4aa7-bb6e-3dc26e9b687e-node-bootstrap-token\") pod \"machine-config-server-97g57\" (UID: \"3734de4b-3d99-4aa7-bb6e-3dc26e9b687e\") " pod="openshift-machine-config-operator/machine-config-server-97g57" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.544211 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8bce249f-7fe9-4bf4-abaa-7c8bc254b488-srv-cert\") pod \"catalog-operator-68c6474976-mpgj4\" (UID: \"8bce249f-7fe9-4bf4-abaa-7c8bc254b488\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mpgj4" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.544224 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/61f4c501-c97d-4a5b-9105-1918dec567a8-plugins-dir\") pod \"csi-hostpathplugin-q2vkk\" (UID: \"61f4c501-c97d-4a5b-9105-1918dec567a8\") " pod="hostpath-provisioner/csi-hostpathplugin-q2vkk" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.544242 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4c4a4714-5f62-440f-ad51-bc55a08ad978-proxy-tls\") pod \"machine-config-operator-74547568cd-hf42t\" (UID: \"4c4a4714-5f62-440f-ad51-bc55a08ad978\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hf42t" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 
10:34:05.544262 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cvt2\" (UniqueName: \"kubernetes.io/projected/89fdb811-5cae-4ece-a672-207a7af34036-kube-api-access-6cvt2\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.544278 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzm9l\" (UniqueName: \"kubernetes.io/projected/cb5d2b7c-15cb-4214-bd8c-0f9d144567f7-kube-api-access-mzm9l\") pod \"package-server-manager-789f6589d5-nklcx\" (UID: \"cb5d2b7c-15cb-4214-bd8c-0f9d144567f7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nklcx" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.544295 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf38abdc-cb92-4ce5-b0c1-0d5c084aa359-config\") pod \"service-ca-operator-777779d784-d2ltx\" (UID: \"bf38abdc-cb92-4ce5-b0c1-0d5c084aa359\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-d2ltx" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.544318 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71172c07-4152-40e9-92ee-bee73fb6e3da-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-g9f6m\" (UID: \"71172c07-4152-40e9-92ee-bee73fb6e3da\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g9f6m" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.544433 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/89fdb811-5cae-4ece-a672-207a7af34036-trusted-ca\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.544529 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/666f7543-9986-44d6-a7e4-dec723ea6a19-metrics-tls\") pod \"dns-default-lsg8f\" (UID: \"666f7543-9986-44d6-a7e4-dec723ea6a19\") " pod="openshift-dns/dns-default-lsg8f" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.544554 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkwrq\" (UniqueName: \"kubernetes.io/projected/9c96548a-a806-471d-9167-c5c58e8323b9-kube-api-access-fkwrq\") pod \"kube-storage-version-migrator-operator-b67b599dd-hdv9w\" (UID: \"9c96548a-a806-471d-9167-c5c58e8323b9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hdv9w" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.544534 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-frcz9" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.544706 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nldnr\" (UniqueName: \"kubernetes.io/projected/bf38abdc-cb92-4ce5-b0c1-0d5c084aa359-kube-api-access-nldnr\") pod \"service-ca-operator-777779d784-d2ltx\" (UID: \"bf38abdc-cb92-4ce5-b0c1-0d5c084aa359\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-d2ltx" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.544762 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/61f4c501-c97d-4a5b-9105-1918dec567a8-socket-dir\") pod \"csi-hostpathplugin-q2vkk\" (UID: \"61f4c501-c97d-4a5b-9105-1918dec567a8\") " pod="hostpath-provisioner/csi-hostpathplugin-q2vkk" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.544782 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/b5580f94-06e3-4a91-b3e8-b1d7962438dd-default-certificate\") pod \"router-default-5444994796-hvj2g\" (UID: \"b5580f94-06e3-4a91-b3e8-b1d7962438dd\") " pod="openshift-ingress/router-default-5444994796-hvj2g" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.544799 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/61f4c501-c97d-4a5b-9105-1918dec567a8-mountpoint-dir\") pod \"csi-hostpathplugin-q2vkk\" (UID: \"61f4c501-c97d-4a5b-9105-1918dec567a8\") " pod="hostpath-provisioner/csi-hostpathplugin-q2vkk" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.544827 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b84b3a16-f833-40dc-8356-58bbc7aa3667-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-xlrc4\" (UID: \"b84b3a16-f833-40dc-8356-58bbc7aa3667\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xlrc4" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.544844 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65sh9\" (UniqueName: \"kubernetes.io/projected/61f4c501-c97d-4a5b-9105-1918dec567a8-kube-api-access-65sh9\") pod \"csi-hostpathplugin-q2vkk\" (UID: \"61f4c501-c97d-4a5b-9105-1918dec567a8\") " pod="hostpath-provisioner/csi-hostpathplugin-q2vkk" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.544926 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5189d915-46f9-4116-b03c-e672fc9a2195-srv-cert\") pod \"olm-operator-6b444d44fb-shxrh\" (UID: \"5189d915-46f9-4116-b03c-e672fc9a2195\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-shxrh" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.544976 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/670af6e7-f49a-40c1-9f2d-c3df905e9e44-secret-volume\") pod \"collect-profiles-29401110-g625d\" (UID: \"670af6e7-f49a-40c1-9f2d-c3df905e9e44\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29401110-g625d" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.545022 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/b5580f94-06e3-4a91-b3e8-b1d7962438dd-stats-auth\") pod \"router-default-5444994796-hvj2g\" (UID: \"b5580f94-06e3-4a91-b3e8-b1d7962438dd\") " pod="openshift-ingress/router-default-5444994796-hvj2g" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.545075 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/61f4c501-c97d-4a5b-9105-1918dec567a8-registration-dir\") pod \"csi-hostpathplugin-q2vkk\" (UID: \"61f4c501-c97d-4a5b-9105-1918dec567a8\") " pod="hostpath-provisioner/csi-hostpathplugin-q2vkk" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.545128 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wgl8\" (UniqueName: \"kubernetes.io/projected/5189d915-46f9-4116-b03c-e672fc9a2195-kube-api-access-8wgl8\") pod \"olm-operator-6b444d44fb-shxrh\" (UID: \"5189d915-46f9-4116-b03c-e672fc9a2195\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-shxrh" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.545554 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/89fdb811-5cae-4ece-a672-207a7af34036-ca-trust-extracted\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.547433 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/89fdb811-5cae-4ece-a672-207a7af34036-registry-certificates\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.548109 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/89fdb811-5cae-4ece-a672-207a7af34036-bound-sa-token\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.548151 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b84b3a16-f833-40dc-8356-58bbc7aa3667-proxy-tls\") pod \"machine-config-controller-84d6567774-xlrc4\" (UID: \"b84b3a16-f833-40dc-8356-58bbc7aa3667\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xlrc4" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.548597 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/82507459-3471-4865-80e3-92a53d57f352-webhook-cert\") pod \"packageserver-d55dfcdfc-8xspn\" (UID: \"82507459-3471-4865-80e3-92a53d57f352\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8xspn" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.548646 4813 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8bce249f-7fe9-4bf4-abaa-7c8bc254b488-profile-collector-cert\") pod \"catalog-operator-68c6474976-mpgj4\" (UID: \"8bce249f-7fe9-4bf4-abaa-7c8bc254b488\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mpgj4" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.548720 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17571cbf-de36-4b34-af0b-3db7493adaf4-trusted-ca-bundle\") pod \"console-f9d7485db-rpfp2\" (UID: \"17571cbf-de36-4b34-af0b-3db7493adaf4\") " pod="openshift-console/console-f9d7485db-rpfp2" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.548828 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpkfx\" (UniqueName: \"kubernetes.io/projected/b5580f94-06e3-4a91-b3e8-b1d7962438dd-kube-api-access-mpkfx\") pod \"router-default-5444994796-hvj2g\" (UID: \"b5580f94-06e3-4a91-b3e8-b1d7962438dd\") " pod="openshift-ingress/router-default-5444994796-hvj2g" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.553586 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/89fdb811-5cae-4ece-a672-207a7af34036-installation-pull-secrets\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.561402 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/89fdb811-5cae-4ece-a672-207a7af34036-registry-tls\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.574177 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7hmqn" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.597150 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-hkxnn" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.598634 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cvt2\" (UniqueName: \"kubernetes.io/projected/89fdb811-5cae-4ece-a672-207a7af34036-kube-api-access-6cvt2\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.604708 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/89fdb811-5cae-4ece-a672-207a7af34036-bound-sa-token\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.606334 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-dsd6j" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.623737 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-wsrtz" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.631175 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-w7ltb"] Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.649785 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:05 crc kubenswrapper[4813]: E1125 10:34:05.649911 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:06.14989109 +0000 UTC m=+143.279600986 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.650001 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/3734de4b-3d99-4aa7-bb6e-3dc26e9b687e-certs\") pod \"machine-config-server-97g57\" (UID: \"3734de4b-3d99-4aa7-bb6e-3dc26e9b687e\") " pod="openshift-machine-config-operator/machine-config-server-97g57" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.650032 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b5580f94-06e3-4a91-b3e8-b1d7962438dd-service-ca-bundle\") pod \"router-default-5444994796-hvj2g\" (UID: \"b5580f94-06e3-4a91-b3e8-b1d7962438dd\") " pod="openshift-ingress/router-default-5444994796-hvj2g" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.650058 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/cb5d2b7c-15cb-4214-bd8c-0f9d144567f7-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-nklcx\" (UID: \"cb5d2b7c-15cb-4214-bd8c-0f9d144567f7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nklcx" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.650083 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2w24g\" (UniqueName: \"kubernetes.io/projected/f88ebc7e-4e3d-4c3f-8f98-ca34b1cc76ad-kube-api-access-2w24g\") pod \"ingress-canary-fnjf2\" (UID: \"f88ebc7e-4e3d-4c3f-8f98-ca34b1cc76ad\") " pod="openshift-ingress-canary/ingress-canary-fnjf2" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.650103 4813 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/17571cbf-de36-4b34-af0b-3db7493adaf4-console-serving-cert\") pod \"console-f9d7485db-rpfp2\" (UID: \"17571cbf-de36-4b34-af0b-3db7493adaf4\") " pod="openshift-console/console-f9d7485db-rpfp2" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.650128 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/3734de4b-3d99-4aa7-bb6e-3dc26e9b687e-node-bootstrap-token\") pod \"machine-config-server-97g57\" (UID: \"3734de4b-3d99-4aa7-bb6e-3dc26e9b687e\") " pod="openshift-machine-config-operator/machine-config-server-97g57" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.650150 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8bce249f-7fe9-4bf4-abaa-7c8bc254b488-srv-cert\") pod \"catalog-operator-68c6474976-mpgj4\" (UID: \"8bce249f-7fe9-4bf4-abaa-7c8bc254b488\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mpgj4" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.650170 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4c4a4714-5f62-440f-ad51-bc55a08ad978-proxy-tls\") pod \"machine-config-operator-74547568cd-hf42t\" (UID: \"4c4a4714-5f62-440f-ad51-bc55a08ad978\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hf42t" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.650191 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/61f4c501-c97d-4a5b-9105-1918dec567a8-plugins-dir\") pod \"csi-hostpathplugin-q2vkk\" (UID: \"61f4c501-c97d-4a5b-9105-1918dec567a8\") " pod="hostpath-provisioner/csi-hostpathplugin-q2vkk" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.650215 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzm9l\" (UniqueName: \"kubernetes.io/projected/cb5d2b7c-15cb-4214-bd8c-0f9d144567f7-kube-api-access-mzm9l\") pod \"package-server-manager-789f6589d5-nklcx\" (UID: \"cb5d2b7c-15cb-4214-bd8c-0f9d144567f7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nklcx" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.650234 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf38abdc-cb92-4ce5-b0c1-0d5c084aa359-config\") pod \"service-ca-operator-777779d784-d2ltx\" (UID: \"bf38abdc-cb92-4ce5-b0c1-0d5c084aa359\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-d2ltx" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.650521 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71172c07-4152-40e9-92ee-bee73fb6e3da-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-g9f6m\" (UID: \"71172c07-4152-40e9-92ee-bee73fb6e3da\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g9f6m" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.650554 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/666f7543-9986-44d6-a7e4-dec723ea6a19-metrics-tls\") pod \"dns-default-lsg8f\" (UID: \"666f7543-9986-44d6-a7e4-dec723ea6a19\") " 
pod="openshift-dns/dns-default-lsg8f" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.650575 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkwrq\" (UniqueName: \"kubernetes.io/projected/9c96548a-a806-471d-9167-c5c58e8323b9-kube-api-access-fkwrq\") pod \"kube-storage-version-migrator-operator-b67b599dd-hdv9w\" (UID: \"9c96548a-a806-471d-9167-c5c58e8323b9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hdv9w" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.650604 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nldnr\" (UniqueName: \"kubernetes.io/projected/bf38abdc-cb92-4ce5-b0c1-0d5c084aa359-kube-api-access-nldnr\") pod \"service-ca-operator-777779d784-d2ltx\" (UID: \"bf38abdc-cb92-4ce5-b0c1-0d5c084aa359\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-d2ltx" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.650628 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/61f4c501-c97d-4a5b-9105-1918dec567a8-socket-dir\") pod \"csi-hostpathplugin-q2vkk\" (UID: \"61f4c501-c97d-4a5b-9105-1918dec567a8\") " pod="hostpath-provisioner/csi-hostpathplugin-q2vkk" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.650650 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/b5580f94-06e3-4a91-b3e8-b1d7962438dd-default-certificate\") pod \"router-default-5444994796-hvj2g\" (UID: \"b5580f94-06e3-4a91-b3e8-b1d7962438dd\") " pod="openshift-ingress/router-default-5444994796-hvj2g" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.650670 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/61f4c501-c97d-4a5b-9105-1918dec567a8-mountpoint-dir\") pod \"csi-hostpathplugin-q2vkk\" (UID: \"61f4c501-c97d-4a5b-9105-1918dec567a8\") " pod="hostpath-provisioner/csi-hostpathplugin-q2vkk" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.650711 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b84b3a16-f833-40dc-8356-58bbc7aa3667-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-xlrc4\" (UID: \"b84b3a16-f833-40dc-8356-58bbc7aa3667\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xlrc4" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.650740 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65sh9\" (UniqueName: \"kubernetes.io/projected/61f4c501-c97d-4a5b-9105-1918dec567a8-kube-api-access-65sh9\") pod \"csi-hostpathplugin-q2vkk\" (UID: \"61f4c501-c97d-4a5b-9105-1918dec567a8\") " pod="hostpath-provisioner/csi-hostpathplugin-q2vkk" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.650763 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5189d915-46f9-4116-b03c-e672fc9a2195-srv-cert\") pod \"olm-operator-6b444d44fb-shxrh\" (UID: \"5189d915-46f9-4116-b03c-e672fc9a2195\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-shxrh" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.650796 4813 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/670af6e7-f49a-40c1-9f2d-c3df905e9e44-secret-volume\") pod \"collect-profiles-29401110-g625d\" (UID: \"670af6e7-f49a-40c1-9f2d-c3df905e9e44\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401110-g625d" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.650817 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/b5580f94-06e3-4a91-b3e8-b1d7962438dd-stats-auth\") pod \"router-default-5444994796-hvj2g\" (UID: \"b5580f94-06e3-4a91-b3e8-b1d7962438dd\") " pod="openshift-ingress/router-default-5444994796-hvj2g" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.650993 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/61f4c501-c97d-4a5b-9105-1918dec567a8-registration-dir\") pod \"csi-hostpathplugin-q2vkk\" (UID: \"61f4c501-c97d-4a5b-9105-1918dec567a8\") " pod="hostpath-provisioner/csi-hostpathplugin-q2vkk" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.651017 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wgl8\" (UniqueName: \"kubernetes.io/projected/5189d915-46f9-4116-b03c-e672fc9a2195-kube-api-access-8wgl8\") pod \"olm-operator-6b444d44fb-shxrh\" (UID: \"5189d915-46f9-4116-b03c-e672fc9a2195\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-shxrh" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.651052 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b84b3a16-f833-40dc-8356-58bbc7aa3667-proxy-tls\") pod \"machine-config-controller-84d6567774-xlrc4\" (UID: \"b84b3a16-f833-40dc-8356-58bbc7aa3667\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xlrc4" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.651064 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/61f4c501-c97d-4a5b-9105-1918dec567a8-plugins-dir\") pod \"csi-hostpathplugin-q2vkk\" (UID: \"61f4c501-c97d-4a5b-9105-1918dec567a8\") " pod="hostpath-provisioner/csi-hostpathplugin-q2vkk" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.651107 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/61f4c501-c97d-4a5b-9105-1918dec567a8-socket-dir\") pod \"csi-hostpathplugin-q2vkk\" (UID: \"61f4c501-c97d-4a5b-9105-1918dec567a8\") " pod="hostpath-provisioner/csi-hostpathplugin-q2vkk" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.655325 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b5580f94-06e3-4a91-b3e8-b1d7962438dd-service-ca-bundle\") pod \"router-default-5444994796-hvj2g\" (UID: \"b5580f94-06e3-4a91-b3e8-b1d7962438dd\") " pod="openshift-ingress/router-default-5444994796-hvj2g" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.657179 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf38abdc-cb92-4ce5-b0c1-0d5c084aa359-config\") pod \"service-ca-operator-777779d784-d2ltx\" (UID: \"bf38abdc-cb92-4ce5-b0c1-0d5c084aa359\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-d2ltx" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.659700 
4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4c4a4714-5f62-440f-ad51-bc55a08ad978-proxy-tls\") pod \"machine-config-operator-74547568cd-hf42t\" (UID: \"4c4a4714-5f62-440f-ad51-bc55a08ad978\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hf42t" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.659773 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/670af6e7-f49a-40c1-9f2d-c3df905e9e44-secret-volume\") pod \"collect-profiles-29401110-g625d\" (UID: \"670af6e7-f49a-40c1-9f2d-c3df905e9e44\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401110-g625d" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.651072 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/82507459-3471-4865-80e3-92a53d57f352-webhook-cert\") pod \"packageserver-d55dfcdfc-8xspn\" (UID: \"82507459-3471-4865-80e3-92a53d57f352\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8xspn" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.660203 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/b5580f94-06e3-4a91-b3e8-b1d7962438dd-default-certificate\") pod \"router-default-5444994796-hvj2g\" (UID: \"b5580f94-06e3-4a91-b3e8-b1d7962438dd\") " pod="openshift-ingress/router-default-5444994796-hvj2g" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.660227 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8bce249f-7fe9-4bf4-abaa-7c8bc254b488-profile-collector-cert\") pod \"catalog-operator-68c6474976-mpgj4\" (UID: \"8bce249f-7fe9-4bf4-abaa-7c8bc254b488\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mpgj4" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.660296 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/61f4c501-c97d-4a5b-9105-1918dec567a8-mountpoint-dir\") pod \"csi-hostpathplugin-q2vkk\" (UID: \"61f4c501-c97d-4a5b-9105-1918dec567a8\") " pod="hostpath-provisioner/csi-hostpathplugin-q2vkk" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.660298 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17571cbf-de36-4b34-af0b-3db7493adaf4-trusted-ca-bundle\") pod \"console-f9d7485db-rpfp2\" (UID: \"17571cbf-de36-4b34-af0b-3db7493adaf4\") " pod="openshift-console/console-f9d7485db-rpfp2" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.660342 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b84b3a16-f833-40dc-8356-58bbc7aa3667-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-xlrc4\" (UID: \"b84b3a16-f833-40dc-8356-58bbc7aa3667\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xlrc4" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.660354 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpkfx\" (UniqueName: \"kubernetes.io/projected/b5580f94-06e3-4a91-b3e8-b1d7962438dd-kube-api-access-mpkfx\") pod \"router-default-5444994796-hvj2g\" (UID: 
\"b5580f94-06e3-4a91-b3e8-b1d7962438dd\") " pod="openshift-ingress/router-default-5444994796-hvj2g" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.660385 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/5189d915-46f9-4116-b03c-e672fc9a2195-profile-collector-cert\") pod \"olm-operator-6b444d44fb-shxrh\" (UID: \"5189d915-46f9-4116-b03c-e672fc9a2195\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-shxrh" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.660406 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/670af6e7-f49a-40c1-9f2d-c3df905e9e44-config-volume\") pod \"collect-profiles-29401110-g625d\" (UID: \"670af6e7-f49a-40c1-9f2d-c3df905e9e44\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401110-g625d" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.660427 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b5580f94-06e3-4a91-b3e8-b1d7962438dd-metrics-certs\") pod \"router-default-5444994796-hvj2g\" (UID: \"b5580f94-06e3-4a91-b3e8-b1d7962438dd\") " pod="openshift-ingress/router-default-5444994796-hvj2g" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.660457 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/82507459-3471-4865-80e3-92a53d57f352-tmpfs\") pod \"packageserver-d55dfcdfc-8xspn\" (UID: \"82507459-3471-4865-80e3-92a53d57f352\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8xspn" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.660484 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71172c07-4152-40e9-92ee-bee73fb6e3da-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-g9f6m\" (UID: \"71172c07-4152-40e9-92ee-bee73fb6e3da\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g9f6m" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.660507 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdvwq\" (UniqueName: \"kubernetes.io/projected/8bce249f-7fe9-4bf4-abaa-7c8bc254b488-kube-api-access-qdvwq\") pod \"catalog-operator-68c6474976-mpgj4\" (UID: \"8bce249f-7fe9-4bf4-abaa-7c8bc254b488\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mpgj4" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.660527 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/61f4c501-c97d-4a5b-9105-1918dec567a8-csi-data-dir\") pod \"csi-hostpathplugin-q2vkk\" (UID: \"61f4c501-c97d-4a5b-9105-1918dec567a8\") " pod="hostpath-provisioner/csi-hostpathplugin-q2vkk" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.660555 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bn6jg\" (UniqueName: \"kubernetes.io/projected/302f6a62-c67c-48ef-97bc-9b53cdf5f67e-kube-api-access-bn6jg\") pod \"marketplace-operator-79b997595-7s8tp\" (UID: \"302f6a62-c67c-48ef-97bc-9b53cdf5f67e\") " pod="openshift-marketplace/marketplace-operator-79b997595-7s8tp" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.660573 4813 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/82507459-3471-4865-80e3-92a53d57f352-webhook-cert\") pod \"packageserver-d55dfcdfc-8xspn\" (UID: \"82507459-3471-4865-80e3-92a53d57f352\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8xspn" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.660578 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jw5z7\" (UniqueName: \"kubernetes.io/projected/dc9b8a2f-2bce-43c9-a8c5-1bf29d7d5964-kube-api-access-jw5z7\") pod \"control-plane-machine-set-operator-78cbb6b69f-vn7cb\" (UID: \"dc9b8a2f-2bce-43c9-a8c5-1bf29d7d5964\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vn7cb" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.660630 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8bce249f-7fe9-4bf4-abaa-7c8bc254b488-srv-cert\") pod \"catalog-operator-68c6474976-mpgj4\" (UID: \"8bce249f-7fe9-4bf4-abaa-7c8bc254b488\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mpgj4" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.660735 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/61f4c501-c97d-4a5b-9105-1918dec567a8-registration-dir\") pod \"csi-hostpathplugin-q2vkk\" (UID: \"61f4c501-c97d-4a5b-9105-1918dec567a8\") " pod="hostpath-provisioner/csi-hostpathplugin-q2vkk" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.660638 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/666f7543-9986-44d6-a7e4-dec723ea6a19-config-volume\") pod \"dns-default-lsg8f\" (UID: \"666f7543-9986-44d6-a7e4-dec723ea6a19\") " pod="openshift-dns/dns-default-lsg8f" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.660794 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/51935f3a-e932-43af-b272-01d8d88a1bf3-signing-key\") pod \"service-ca-9c57cc56f-mbr49\" (UID: \"51935f3a-e932-43af-b272-01d8d88a1bf3\") " pod="openshift-service-ca/service-ca-9c57cc56f-mbr49" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.660831 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-667pg\" (UniqueName: \"kubernetes.io/projected/82507459-3471-4865-80e3-92a53d57f352-kube-api-access-667pg\") pod \"packageserver-d55dfcdfc-8xspn\" (UID: \"82507459-3471-4865-80e3-92a53d57f352\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8xspn" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.660848 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/51935f3a-e932-43af-b272-01d8d88a1bf3-signing-cabundle\") pod \"service-ca-9c57cc56f-mbr49\" (UID: \"51935f3a-e932-43af-b272-01d8d88a1bf3\") " pod="openshift-service-ca/service-ca-9c57cc56f-mbr49" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.660872 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71172c07-4152-40e9-92ee-bee73fb6e3da-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-g9f6m\" (UID: \"71172c07-4152-40e9-92ee-bee73fb6e3da\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g9f6m" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.660891 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/17571cbf-de36-4b34-af0b-3db7493adaf4-oauth-serving-cert\") pod \"console-f9d7485db-rpfp2\" (UID: \"17571cbf-de36-4b34-af0b-3db7493adaf4\") " pod="openshift-console/console-f9d7485db-rpfp2" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.660907 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p68fn\" (UniqueName: \"kubernetes.io/projected/17571cbf-de36-4b34-af0b-3db7493adaf4-kube-api-access-p68fn\") pod \"console-f9d7485db-rpfp2\" (UID: \"17571cbf-de36-4b34-af0b-3db7493adaf4\") " pod="openshift-console/console-f9d7485db-rpfp2" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.660925 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/302f6a62-c67c-48ef-97bc-9b53cdf5f67e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-7s8tp\" (UID: \"302f6a62-c67c-48ef-97bc-9b53cdf5f67e\") " pod="openshift-marketplace/marketplace-operator-79b997595-7s8tp" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.660946 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/17571cbf-de36-4b34-af0b-3db7493adaf4-console-config\") pod \"console-f9d7485db-rpfp2\" (UID: \"17571cbf-de36-4b34-af0b-3db7493adaf4\") " pod="openshift-console/console-f9d7485db-rpfp2" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.660970 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4c4a4714-5f62-440f-ad51-bc55a08ad978-auth-proxy-config\") pod \"machine-config-operator-74547568cd-hf42t\" (UID: \"4c4a4714-5f62-440f-ad51-bc55a08ad978\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hf42t" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.660993 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c96548a-a806-471d-9167-c5c58e8323b9-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-hdv9w\" (UID: \"9c96548a-a806-471d-9167-c5c58e8323b9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hdv9w" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.661012 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4c4a4714-5f62-440f-ad51-bc55a08ad978-images\") pod \"machine-config-operator-74547568cd-hf42t\" (UID: \"4c4a4714-5f62-440f-ad51-bc55a08ad978\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hf42t" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.661033 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gb57k\" (UniqueName: \"kubernetes.io/projected/4c4a4714-5f62-440f-ad51-bc55a08ad978-kube-api-access-gb57k\") pod \"machine-config-operator-74547568cd-hf42t\" (UID: \"4c4a4714-5f62-440f-ad51-bc55a08ad978\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hf42t" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 
10:34:05.661050 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf38abdc-cb92-4ce5-b0c1-0d5c084aa359-serving-cert\") pod \"service-ca-operator-777779d784-d2ltx\" (UID: \"bf38abdc-cb92-4ce5-b0c1-0d5c084aa359\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-d2ltx" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.661067 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c96548a-a806-471d-9167-c5c58e8323b9-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-hdv9w\" (UID: \"9c96548a-a806-471d-9167-c5c58e8323b9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hdv9w" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.661085 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lw4n4\" (UniqueName: \"kubernetes.io/projected/b84b3a16-f833-40dc-8356-58bbc7aa3667-kube-api-access-lw4n4\") pod \"machine-config-controller-84d6567774-xlrc4\" (UID: \"b84b3a16-f833-40dc-8356-58bbc7aa3667\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xlrc4" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.661106 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/17571cbf-de36-4b34-af0b-3db7493adaf4-console-oauth-config\") pod \"console-f9d7485db-rpfp2\" (UID: \"17571cbf-de36-4b34-af0b-3db7493adaf4\") " pod="openshift-console/console-f9d7485db-rpfp2" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.661132 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.661166 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtxcd\" (UniqueName: \"kubernetes.io/projected/666f7543-9986-44d6-a7e4-dec723ea6a19-kube-api-access-wtxcd\") pod \"dns-default-lsg8f\" (UID: \"666f7543-9986-44d6-a7e4-dec723ea6a19\") " pod="openshift-dns/dns-default-lsg8f" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.661208 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f88ebc7e-4e3d-4c3f-8f98-ca34b1cc76ad-cert\") pod \"ingress-canary-fnjf2\" (UID: \"f88ebc7e-4e3d-4c3f-8f98-ca34b1cc76ad\") " pod="openshift-ingress-canary/ingress-canary-fnjf2" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.661229 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/dc9b8a2f-2bce-43c9-a8c5-1bf29d7d5964-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-vn7cb\" (UID: \"dc9b8a2f-2bce-43c9-a8c5-1bf29d7d5964\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vn7cb" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.661249 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" 
(UniqueName: \"kubernetes.io/secret/82507459-3471-4865-80e3-92a53d57f352-apiservice-cert\") pod \"packageserver-d55dfcdfc-8xspn\" (UID: \"82507459-3471-4865-80e3-92a53d57f352\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8xspn" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.661264 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fswr5\" (UniqueName: \"kubernetes.io/projected/670af6e7-f49a-40c1-9f2d-c3df905e9e44-kube-api-access-fswr5\") pod \"collect-profiles-29401110-g625d\" (UID: \"670af6e7-f49a-40c1-9f2d-c3df905e9e44\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401110-g625d" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.661291 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/17571cbf-de36-4b34-af0b-3db7493adaf4-service-ca\") pod \"console-f9d7485db-rpfp2\" (UID: \"17571cbf-de36-4b34-af0b-3db7493adaf4\") " pod="openshift-console/console-f9d7485db-rpfp2" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.661310 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5q6b9\" (UniqueName: \"kubernetes.io/projected/7a14e28a-ae0c-47e5-b762-cc2f4f191b83-kube-api-access-5q6b9\") pod \"migrator-59844c95c7-tfjp7\" (UID: \"7a14e28a-ae0c-47e5-b762-cc2f4f191b83\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-tfjp7" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.661347 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/302f6a62-c67c-48ef-97bc-9b53cdf5f67e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-7s8tp\" (UID: \"302f6a62-c67c-48ef-97bc-9b53cdf5f67e\") " pod="openshift-marketplace/marketplace-operator-79b997595-7s8tp" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.661371 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5h8j\" (UniqueName: \"kubernetes.io/projected/51935f3a-e932-43af-b272-01d8d88a1bf3-kube-api-access-b5h8j\") pod \"service-ca-9c57cc56f-mbr49\" (UID: \"51935f3a-e932-43af-b272-01d8d88a1bf3\") " pod="openshift-service-ca/service-ca-9c57cc56f-mbr49" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.661396 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cncvm\" (UniqueName: \"kubernetes.io/projected/3734de4b-3d99-4aa7-bb6e-3dc26e9b687e-kube-api-access-cncvm\") pod \"machine-config-server-97g57\" (UID: \"3734de4b-3d99-4aa7-bb6e-3dc26e9b687e\") " pod="openshift-machine-config-operator/machine-config-server-97g57" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.662271 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/61f4c501-c97d-4a5b-9105-1918dec567a8-csi-data-dir\") pod \"csi-hostpathplugin-q2vkk\" (UID: \"61f4c501-c97d-4a5b-9105-1918dec567a8\") " pod="hostpath-provisioner/csi-hostpathplugin-q2vkk" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.664326 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/b5580f94-06e3-4a91-b3e8-b1d7962438dd-stats-auth\") pod \"router-default-5444994796-hvj2g\" (UID: \"b5580f94-06e3-4a91-b3e8-b1d7962438dd\") " pod="openshift-ingress/router-default-5444994796-hvj2g" Nov 25 10:34:05 crc 
kubenswrapper[4813]: I1125 10:34:05.664485 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71172c07-4152-40e9-92ee-bee73fb6e3da-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-g9f6m\" (UID: \"71172c07-4152-40e9-92ee-bee73fb6e3da\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g9f6m" Nov 25 10:34:05 crc kubenswrapper[4813]: E1125 10:34:05.664616 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:06.164599583 +0000 UTC m=+143.294309549 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.664935 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/3734de4b-3d99-4aa7-bb6e-3dc26e9b687e-node-bootstrap-token\") pod \"machine-config-server-97g57\" (UID: \"3734de4b-3d99-4aa7-bb6e-3dc26e9b687e\") " pod="openshift-machine-config-operator/machine-config-server-97g57" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.666741 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/3734de4b-3d99-4aa7-bb6e-3dc26e9b687e-certs\") pod \"machine-config-server-97g57\" (UID: \"3734de4b-3d99-4aa7-bb6e-3dc26e9b687e\") " pod="openshift-machine-config-operator/machine-config-server-97g57" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.667426 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71172c07-4152-40e9-92ee-bee73fb6e3da-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-g9f6m\" (UID: \"71172c07-4152-40e9-92ee-bee73fb6e3da\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g9f6m" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.668789 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/666f7543-9986-44d6-a7e4-dec723ea6a19-metrics-tls\") pod \"dns-default-lsg8f\" (UID: \"666f7543-9986-44d6-a7e4-dec723ea6a19\") " pod="openshift-dns/dns-default-lsg8f" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.668867 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8bce249f-7fe9-4bf4-abaa-7c8bc254b488-profile-collector-cert\") pod \"catalog-operator-68c6474976-mpgj4\" (UID: \"8bce249f-7fe9-4bf4-abaa-7c8bc254b488\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mpgj4" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.669344 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5189d915-46f9-4116-b03c-e672fc9a2195-srv-cert\") pod \"olm-operator-6b444d44fb-shxrh\" (UID: 
\"5189d915-46f9-4116-b03c-e672fc9a2195\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-shxrh" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.669495 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/670af6e7-f49a-40c1-9f2d-c3df905e9e44-config-volume\") pod \"collect-profiles-29401110-g625d\" (UID: \"670af6e7-f49a-40c1-9f2d-c3df905e9e44\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401110-g625d" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.669716 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/51935f3a-e932-43af-b272-01d8d88a1bf3-signing-cabundle\") pod \"service-ca-9c57cc56f-mbr49\" (UID: \"51935f3a-e932-43af-b272-01d8d88a1bf3\") " pod="openshift-service-ca/service-ca-9c57cc56f-mbr49" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.670025 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/51935f3a-e932-43af-b272-01d8d88a1bf3-signing-key\") pod \"service-ca-9c57cc56f-mbr49\" (UID: \"51935f3a-e932-43af-b272-01d8d88a1bf3\") " pod="openshift-service-ca/service-ca-9c57cc56f-mbr49" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.670309 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/5189d915-46f9-4116-b03c-e672fc9a2195-profile-collector-cert\") pod \"olm-operator-6b444d44fb-shxrh\" (UID: \"5189d915-46f9-4116-b03c-e672fc9a2195\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-shxrh" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.670516 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c96548a-a806-471d-9167-c5c58e8323b9-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-hdv9w\" (UID: \"9c96548a-a806-471d-9167-c5c58e8323b9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hdv9w" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.671129 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f88ebc7e-4e3d-4c3f-8f98-ca34b1cc76ad-cert\") pod \"ingress-canary-fnjf2\" (UID: \"f88ebc7e-4e3d-4c3f-8f98-ca34b1cc76ad\") " pod="openshift-ingress-canary/ingress-canary-fnjf2" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.671198 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17571cbf-de36-4b34-af0b-3db7493adaf4-trusted-ca-bundle\") pod \"console-f9d7485db-rpfp2\" (UID: \"17571cbf-de36-4b34-af0b-3db7493adaf4\") " pod="openshift-console/console-f9d7485db-rpfp2" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.672047 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c96548a-a806-471d-9167-c5c58e8323b9-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-hdv9w\" (UID: \"9c96548a-a806-471d-9167-c5c58e8323b9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hdv9w" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.673074 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: 
\"kubernetes.io/empty-dir/82507459-3471-4865-80e3-92a53d57f352-tmpfs\") pod \"packageserver-d55dfcdfc-8xspn\" (UID: \"82507459-3471-4865-80e3-92a53d57f352\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8xspn" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.673484 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4c4a4714-5f62-440f-ad51-bc55a08ad978-auth-proxy-config\") pod \"machine-config-operator-74547568cd-hf42t\" (UID: \"4c4a4714-5f62-440f-ad51-bc55a08ad978\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hf42t" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.673856 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/cb5d2b7c-15cb-4214-bd8c-0f9d144567f7-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-nklcx\" (UID: \"cb5d2b7c-15cb-4214-bd8c-0f9d144567f7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nklcx" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.673919 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/82507459-3471-4865-80e3-92a53d57f352-apiservice-cert\") pod \"packageserver-d55dfcdfc-8xspn\" (UID: \"82507459-3471-4865-80e3-92a53d57f352\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8xspn" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.674090 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b84b3a16-f833-40dc-8356-58bbc7aa3667-proxy-tls\") pod \"machine-config-controller-84d6567774-xlrc4\" (UID: \"b84b3a16-f833-40dc-8356-58bbc7aa3667\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xlrc4" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.674200 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf38abdc-cb92-4ce5-b0c1-0d5c084aa359-serving-cert\") pod \"service-ca-operator-777779d784-d2ltx\" (UID: \"bf38abdc-cb92-4ce5-b0c1-0d5c084aa359\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-d2ltx" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.674412 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/dc9b8a2f-2bce-43c9-a8c5-1bf29d7d5964-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-vn7cb\" (UID: \"dc9b8a2f-2bce-43c9-a8c5-1bf29d7d5964\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vn7cb" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.674415 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/17571cbf-de36-4b34-af0b-3db7493adaf4-console-oauth-config\") pod \"console-f9d7485db-rpfp2\" (UID: \"17571cbf-de36-4b34-af0b-3db7493adaf4\") " pod="openshift-console/console-f9d7485db-rpfp2" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.676199 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4c4a4714-5f62-440f-ad51-bc55a08ad978-images\") pod \"machine-config-operator-74547568cd-hf42t\" (UID: 
\"4c4a4714-5f62-440f-ad51-bc55a08ad978\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hf42t" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.676555 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/302f6a62-c67c-48ef-97bc-9b53cdf5f67e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-7s8tp\" (UID: \"302f6a62-c67c-48ef-97bc-9b53cdf5f67e\") " pod="openshift-marketplace/marketplace-operator-79b997595-7s8tp" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.676741 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/17571cbf-de36-4b34-af0b-3db7493adaf4-console-serving-cert\") pod \"console-f9d7485db-rpfp2\" (UID: \"17571cbf-de36-4b34-af0b-3db7493adaf4\") " pod="openshift-console/console-f9d7485db-rpfp2" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.681100 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/17571cbf-de36-4b34-af0b-3db7493adaf4-service-ca\") pod \"console-f9d7485db-rpfp2\" (UID: \"17571cbf-de36-4b34-af0b-3db7493adaf4\") " pod="openshift-console/console-f9d7485db-rpfp2" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.681110 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/17571cbf-de36-4b34-af0b-3db7493adaf4-oauth-serving-cert\") pod \"console-f9d7485db-rpfp2\" (UID: \"17571cbf-de36-4b34-af0b-3db7493adaf4\") " pod="openshift-console/console-f9d7485db-rpfp2" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.681450 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/17571cbf-de36-4b34-af0b-3db7493adaf4-console-config\") pod \"console-f9d7485db-rpfp2\" (UID: \"17571cbf-de36-4b34-af0b-3db7493adaf4\") " pod="openshift-console/console-f9d7485db-rpfp2" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.685277 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/666f7543-9986-44d6-a7e4-dec723ea6a19-config-volume\") pod \"dns-default-lsg8f\" (UID: \"666f7543-9986-44d6-a7e4-dec723ea6a19\") " pod="openshift-dns/dns-default-lsg8f" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.687806 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b5580f94-06e3-4a91-b3e8-b1d7962438dd-metrics-certs\") pod \"router-default-5444994796-hvj2g\" (UID: \"b5580f94-06e3-4a91-b3e8-b1d7962438dd\") " pod="openshift-ingress/router-default-5444994796-hvj2g" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.687838 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/302f6a62-c67c-48ef-97bc-9b53cdf5f67e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-7s8tp\" (UID: \"302f6a62-c67c-48ef-97bc-9b53cdf5f67e\") " pod="openshift-marketplace/marketplace-operator-79b997595-7s8tp" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.688184 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2w24g\" (UniqueName: \"kubernetes.io/projected/f88ebc7e-4e3d-4c3f-8f98-ca34b1cc76ad-kube-api-access-2w24g\") pod \"ingress-canary-fnjf2\" 
(UID: \"f88ebc7e-4e3d-4c3f-8f98-ca34b1cc76ad\") " pod="openshift-ingress-canary/ingress-canary-fnjf2" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.714754 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzm9l\" (UniqueName: \"kubernetes.io/projected/cb5d2b7c-15cb-4214-bd8c-0f9d144567f7-kube-api-access-mzm9l\") pod \"package-server-manager-789f6589d5-nklcx\" (UID: \"cb5d2b7c-15cb-4214-bd8c-0f9d144567f7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nklcx" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.736067 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nklcx" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.765948 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:05 crc kubenswrapper[4813]: E1125 10:34:05.766477 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:06.266455863 +0000 UTC m=+143.396165749 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.784386 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wgl8\" (UniqueName: \"kubernetes.io/projected/5189d915-46f9-4116-b03c-e672fc9a2195-kube-api-access-8wgl8\") pod \"olm-operator-6b444d44fb-shxrh\" (UID: \"5189d915-46f9-4116-b03c-e672fc9a2195\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-shxrh" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.787450 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65sh9\" (UniqueName: \"kubernetes.io/projected/61f4c501-c97d-4a5b-9105-1918dec567a8-kube-api-access-65sh9\") pod \"csi-hostpathplugin-q2vkk\" (UID: \"61f4c501-c97d-4a5b-9105-1918dec567a8\") " pod="hostpath-provisioner/csi-hostpathplugin-q2vkk" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.791769 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jw5z7\" (UniqueName: \"kubernetes.io/projected/dc9b8a2f-2bce-43c9-a8c5-1bf29d7d5964-kube-api-access-jw5z7\") pod \"control-plane-machine-set-operator-78cbb6b69f-vn7cb\" (UID: \"dc9b8a2f-2bce-43c9-a8c5-1bf29d7d5964\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vn7cb" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.810085 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpkfx\" (UniqueName: \"kubernetes.io/projected/b5580f94-06e3-4a91-b3e8-b1d7962438dd-kube-api-access-mpkfx\") pod 
\"router-default-5444994796-hvj2g\" (UID: \"b5580f94-06e3-4a91-b3e8-b1d7962438dd\") " pod="openshift-ingress/router-default-5444994796-hvj2g" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.811265 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-q2vkk" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.823705 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-fnjf2" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.835083 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkwrq\" (UniqueName: \"kubernetes.io/projected/9c96548a-a806-471d-9167-c5c58e8323b9-kube-api-access-fkwrq\") pod \"kube-storage-version-migrator-operator-b67b599dd-hdv9w\" (UID: \"9c96548a-a806-471d-9167-c5c58e8323b9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hdv9w" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.849191 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cncvm\" (UniqueName: \"kubernetes.io/projected/3734de4b-3d99-4aa7-bb6e-3dc26e9b687e-kube-api-access-cncvm\") pod \"machine-config-server-97g57\" (UID: \"3734de4b-3d99-4aa7-bb6e-3dc26e9b687e\") " pod="openshift-machine-config-operator/machine-config-server-97g57" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.849908 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-482dq"] Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.874068 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:05 crc kubenswrapper[4813]: E1125 10:34:05.874624 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:06.374608746 +0000 UTC m=+143.504318632 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.882142 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2d8r7"] Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.885285 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-kht7r"] Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.885653 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71172c07-4152-40e9-92ee-bee73fb6e3da-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-g9f6m\" (UID: \"71172c07-4152-40e9-92ee-bee73fb6e3da\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g9f6m" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.893883 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdvwq\" (UniqueName: \"kubernetes.io/projected/8bce249f-7fe9-4bf4-abaa-7c8bc254b488-kube-api-access-qdvwq\") pod \"catalog-operator-68c6474976-mpgj4\" (UID: \"8bce249f-7fe9-4bf4-abaa-7c8bc254b488\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mpgj4" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.914754 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bn6jg\" (UniqueName: \"kubernetes.io/projected/302f6a62-c67c-48ef-97bc-9b53cdf5f67e-kube-api-access-bn6jg\") pod \"marketplace-operator-79b997595-7s8tp\" (UID: \"302f6a62-c67c-48ef-97bc-9b53cdf5f67e\") " pod="openshift-marketplace/marketplace-operator-79b997595-7s8tp" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.935232 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g9f6m" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.945932 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5q6b9\" (UniqueName: \"kubernetes.io/projected/7a14e28a-ae0c-47e5-b762-cc2f4f191b83-kube-api-access-5q6b9\") pod \"migrator-59844c95c7-tfjp7\" (UID: \"7a14e28a-ae0c-47e5-b762-cc2f4f191b83\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-tfjp7" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.963908 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nldnr\" (UniqueName: \"kubernetes.io/projected/bf38abdc-cb92-4ce5-b0c1-0d5c084aa359-kube-api-access-nldnr\") pod \"service-ca-operator-777779d784-d2ltx\" (UID: \"bf38abdc-cb92-4ce5-b0c1-0d5c084aa359\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-d2ltx" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.966768 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5h8j\" (UniqueName: \"kubernetes.io/projected/51935f3a-e932-43af-b272-01d8d88a1bf3-kube-api-access-b5h8j\") pod \"service-ca-9c57cc56f-mbr49\" (UID: \"51935f3a-e932-43af-b272-01d8d88a1bf3\") " pod="openshift-service-ca/service-ca-9c57cc56f-mbr49" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.968006 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-hvj2g" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.977889 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mpgj4" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.978172 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:05 crc kubenswrapper[4813]: E1125 10:34:05.978745 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:06.478720139 +0000 UTC m=+143.608430025 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.983006 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-shxrh" Nov 25 10:34:05 crc kubenswrapper[4813]: I1125 10:34:05.992311 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-d2ltx" Nov 25 10:34:06 crc kubenswrapper[4813]: I1125 10:34:06.007224 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-7s8tp" Nov 25 10:34:06 crc kubenswrapper[4813]: I1125 10:34:06.014059 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vn7cb" Nov 25 10:34:06 crc kubenswrapper[4813]: I1125 10:34:06.018361 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtxcd\" (UniqueName: \"kubernetes.io/projected/666f7543-9986-44d6-a7e4-dec723ea6a19-kube-api-access-wtxcd\") pod \"dns-default-lsg8f\" (UID: \"666f7543-9986-44d6-a7e4-dec723ea6a19\") " pod="openshift-dns/dns-default-lsg8f" Nov 25 10:34:06 crc kubenswrapper[4813]: I1125 10:34:06.019296 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fswr5\" (UniqueName: \"kubernetes.io/projected/670af6e7-f49a-40c1-9f2d-c3df905e9e44-kube-api-access-fswr5\") pod \"collect-profiles-29401110-g625d\" (UID: \"670af6e7-f49a-40c1-9f2d-c3df905e9e44\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401110-g625d" Nov 25 10:34:06 crc kubenswrapper[4813]: I1125 10:34:06.025192 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-mbr49" Nov 25 10:34:06 crc kubenswrapper[4813]: I1125 10:34:06.034104 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-667pg\" (UniqueName: \"kubernetes.io/projected/82507459-3471-4865-80e3-92a53d57f352-kube-api-access-667pg\") pod \"packageserver-d55dfcdfc-8xspn\" (UID: \"82507459-3471-4865-80e3-92a53d57f352\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8xspn" Nov 25 10:34:06 crc kubenswrapper[4813]: I1125 10:34:06.046474 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hdv9w" Nov 25 10:34:06 crc kubenswrapper[4813]: I1125 10:34:06.046782 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-lsg8f" Nov 25 10:34:06 crc kubenswrapper[4813]: I1125 10:34:06.056368 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-tfjp7" Nov 25 10:34:06 crc kubenswrapper[4813]: I1125 10:34:06.061387 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lw4n4\" (UniqueName: \"kubernetes.io/projected/b84b3a16-f833-40dc-8356-58bbc7aa3667-kube-api-access-lw4n4\") pod \"machine-config-controller-84d6567774-xlrc4\" (UID: \"b84b3a16-f833-40dc-8356-58bbc7aa3667\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xlrc4" Nov 25 10:34:06 crc kubenswrapper[4813]: I1125 10:34:06.069978 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8xspn" Nov 25 10:34:06 crc kubenswrapper[4813]: I1125 10:34:06.070876 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-48zrm"] Nov 25 10:34:06 crc kubenswrapper[4813]: I1125 10:34:06.079351 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:06 crc kubenswrapper[4813]: I1125 10:34:06.079987 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401110-g625d" Nov 25 10:34:06 crc kubenswrapper[4813]: E1125 10:34:06.079965 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:06.579954202 +0000 UTC m=+143.709664088 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:06 crc kubenswrapper[4813]: I1125 10:34:06.089157 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-97g57" Nov 25 10:34:06 crc kubenswrapper[4813]: I1125 10:34:06.095269 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gb57k\" (UniqueName: \"kubernetes.io/projected/4c4a4714-5f62-440f-ad51-bc55a08ad978-kube-api-access-gb57k\") pod \"machine-config-operator-74547568cd-hf42t\" (UID: \"4c4a4714-5f62-440f-ad51-bc55a08ad978\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hf42t" Nov 25 10:34:06 crc kubenswrapper[4813]: I1125 10:34:06.104656 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p68fn\" (UniqueName: \"kubernetes.io/projected/17571cbf-de36-4b34-af0b-3db7493adaf4-kube-api-access-p68fn\") pod \"console-f9d7485db-rpfp2\" (UID: \"17571cbf-de36-4b34-af0b-3db7493adaf4\") " pod="openshift-console/console-f9d7485db-rpfp2" Nov 25 10:34:06 crc kubenswrapper[4813]: I1125 10:34:06.162052 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7hmqn"] Nov 25 10:34:06 crc kubenswrapper[4813]: I1125 10:34:06.180626 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:06 crc kubenswrapper[4813]: E1125 10:34:06.181161 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:06.681137244 +0000 UTC m=+143.810847130 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:06 crc kubenswrapper[4813]: I1125 10:34:06.209021 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-hkxnn"] Nov 25 10:34:06 crc kubenswrapper[4813]: I1125 10:34:06.235424 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-q2vkk"] Nov 25 10:34:06 crc kubenswrapper[4813]: I1125 10:34:06.270118 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ld2mj"] Nov 25 10:34:06 crc kubenswrapper[4813]: W1125 10:34:06.274895 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e74616c_72c8_41c2_901e_272c15e94ee7.slice/crio-232e873596b9d8ddb7b8f77f54528003d90c934ec2856889c7c0cfd1d509ca52 WatchSource:0}: Error finding container 232e873596b9d8ddb7b8f77f54528003d90c934ec2856889c7c0cfd1d509ca52: Status 404 returned error can't find the container with id 232e873596b9d8ddb7b8f77f54528003d90c934ec2856889c7c0cfd1d509ca52 Nov 25 10:34:06 crc kubenswrapper[4813]: I1125 10:34:06.279750 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g9f6m"] Nov 25 10:34:06 crc kubenswrapper[4813]: I1125 10:34:06.281986 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:06 crc kubenswrapper[4813]: E1125 10:34:06.282279 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:06.782266475 +0000 UTC m=+143.911976351 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:06 crc kubenswrapper[4813]: I1125 10:34:06.297806 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-5ngzq" event={"ID":"7a70dbef-bca6-47b6-8814-424cc0cbf441","Type":"ContainerStarted","Data":"3ba1e37f498faf86230a0cf5fe4ebdb7580d00d023cfa34643fc1ed93533f5ad"} Nov 25 10:34:06 crc kubenswrapper[4813]: I1125 10:34:06.298966 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xlrc4" Nov 25 10:34:06 crc kubenswrapper[4813]: I1125 10:34:06.305743 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-dsd6j"] Nov 25 10:34:06 crc kubenswrapper[4813]: W1125 10:34:06.306225 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod61f4c501_c97d_4a5b_9105_1918dec567a8.slice/crio-88326c9893698abb1a1b44d16386acfa8b70c921a04bcaff19e1000349b9974d WatchSource:0}: Error finding container 88326c9893698abb1a1b44d16386acfa8b70c921a04bcaff19e1000349b9974d: Status 404 returned error can't find the container with id 88326c9893698abb1a1b44d16386acfa8b70c921a04bcaff19e1000349b9974d Nov 25 10:34:06 crc kubenswrapper[4813]: I1125 10:34:06.307157 4813 generic.go:334] "Generic (PLEG): container finished" podID="b543d0c3-b775-4c87-bbd0-016e86361945" containerID="3af43f77edea036940177eee047bccb28a86c580b5bd65e54bb5a01458b62f6b" exitCode=0 Nov 25 10:34:06 crc kubenswrapper[4813]: I1125 10:34:06.307341 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cg7wn" event={"ID":"b543d0c3-b775-4c87-bbd0-016e86361945","Type":"ContainerDied","Data":"3af43f77edea036940177eee047bccb28a86c580b5bd65e54bb5a01458b62f6b"} Nov 25 10:34:06 crc kubenswrapper[4813]: I1125 10:34:06.309552 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nklcx"] Nov 25 10:34:06 crc kubenswrapper[4813]: I1125 10:34:06.310344 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-w7ltb" event={"ID":"0120e24c-5159-481f-a3d3-e802a58be557","Type":"ContainerStarted","Data":"72c96680e8ad686bf62aaa5fc0101d0c7d1f327f3f66fafca576b424d0d30299"} Nov 25 10:34:06 crc kubenswrapper[4813]: I1125 10:34:06.311348 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7b2wt" event={"ID":"8804f49f-9764-4368-ab35-dcf4dadfb223","Type":"ContainerStarted","Data":"1b390e87b29e35294e3279d879e3283c9fb3eced3970ad964d5ba84164f96373"} Nov 25 10:34:06 crc kubenswrapper[4813]: I1125 10:34:06.318217 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2d8r7" event={"ID":"5cc0ff08-77b5-4ca3-bded-0dd386a5009d","Type":"ContainerStarted","Data":"c2c833f893f393679a934cfe94cef6a72ad011a00dbfd6f5bd8e3b8fdc80eb66"} Nov 25 10:34:06 crc kubenswrapper[4813]: I1125 10:34:06.325103 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-482dq" event={"ID":"a4fc4e54-61da-43ab-934e-5f7ed6178ab6","Type":"ContainerStarted","Data":"c35636df1b083dc55af9fab4068689e645529cd68caa02f8ac2eb994da795be0"} Nov 25 10:34:06 crc kubenswrapper[4813]: I1125 10:34:06.330087 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-frcz9"] Nov 25 10:34:06 crc kubenswrapper[4813]: I1125 10:34:06.335608 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-wsrtz"] Nov 25 10:34:06 crc kubenswrapper[4813]: I1125 10:34:06.340448 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-879f6c89f-vd4gc" event={"ID":"09baf5a6-68d3-4173-ba92-46e36fab8a2e","Type":"ContainerStarted","Data":"bf7168669f63ce46243ded00ff312f69c3d5533c5df914556ff23be16aaaf44a"} Nov 25 10:34:06 crc kubenswrapper[4813]: I1125 10:34:06.363262 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hf42t" Nov 25 10:34:06 crc kubenswrapper[4813]: I1125 10:34:06.363926 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-fnjf2"] Nov 25 10:34:06 crc kubenswrapper[4813]: I1125 10:34:06.365088 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-n6d5q" event={"ID":"f94406f9-8434-44b5-b86c-15a9d11c4245","Type":"ContainerStarted","Data":"acc465b1f8d2b03f082d9ebfc7bb5a1a96b2a38b4f83592491f0348f2ae84228"} Nov 25 10:34:06 crc kubenswrapper[4813]: I1125 10:34:06.365217 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-n6d5q" event={"ID":"f94406f9-8434-44b5-b86c-15a9d11c4245","Type":"ContainerStarted","Data":"20361c962a9cbc39ad1589a52553ac97ca787d25ec8423a674499146c8b5b336"} Nov 25 10:34:06 crc kubenswrapper[4813]: W1125 10:34:06.367086 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71172c07_4152_40e9_92ee_bee73fb6e3da.slice/crio-f0110595c7a622e95ef487ab1a53e12811d79e2ce4b4e1c44998ef317489402c WatchSource:0}: Error finding container f0110595c7a622e95ef487ab1a53e12811d79e2ce4b4e1c44998ef317489402c: Status 404 returned error can't find the container with id f0110595c7a622e95ef487ab1a53e12811d79e2ce4b4e1c44998ef317489402c Nov 25 10:34:06 crc kubenswrapper[4813]: I1125 10:34:06.367168 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-wfr92" event={"ID":"609ff7ea-0071-4b93-af38-87f1d04aa886","Type":"ContainerStarted","Data":"cf44628b0996fbfd667abe5f9806625fe08f0fa176d49c0c0ad4a84dcd3cad34"} Nov 25 10:34:06 crc kubenswrapper[4813]: I1125 10:34:06.372095 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-456g2" event={"ID":"40d84038-a98d-46f7-90b9-b65d9eb09937","Type":"ContainerStarted","Data":"c8c4391a93d0f90be8c29e0cc350c8703c091b84101dacc222daee5a26956626"} Nov 25 10:34:06 crc kubenswrapper[4813]: I1125 10:34:06.373968 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-48zrm" event={"ID":"616a1226-9627-43a9-a1a7-5dfb4cf863d8","Type":"ContainerStarted","Data":"f819896e80ac4482c3c9beb0ca3d4ca800180588095f65442366c8353c9b1f55"} Nov 25 10:34:06 crc kubenswrapper[4813]: I1125 10:34:06.375557 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kht7r" event={"ID":"083d68a5-93e4-4dbd-9ba0-e4e7d30da8f7","Type":"ContainerStarted","Data":"9625a180a403a75e01c64125fb42b12be039ec5e2d3cf1cc586690c2525338bc"} Nov 25 10:34:06 crc kubenswrapper[4813]: I1125 10:34:06.382757 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:06 crc kubenswrapper[4813]: E1125 10:34:06.382932 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:06.882904962 +0000 UTC m=+144.012614848 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:06 crc kubenswrapper[4813]: I1125 10:34:06.383040 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:06 crc kubenswrapper[4813]: E1125 10:34:06.383360 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:06.883350154 +0000 UTC m=+144.013060120 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:06 crc kubenswrapper[4813]: I1125 10:34:06.388202 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-rpfp2" Nov 25 10:34:06 crc kubenswrapper[4813]: I1125 10:34:06.392405 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ckjsl" event={"ID":"65730459-3e56-4cd2-97f4-4e47f60c32c6","Type":"ContainerStarted","Data":"5ebbbe111cb9bd8bab4f2563fbf434017a30eac4d870ccfa0789fada5346f553"} Nov 25 10:34:06 crc kubenswrapper[4813]: I1125 10:34:06.392463 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ckjsl" event={"ID":"65730459-3e56-4cd2-97f4-4e47f60c32c6","Type":"ContainerStarted","Data":"d932233fa4edbaba7df745c68809a686100ff75122667baef012452360ed8c19"} Nov 25 10:34:06 crc kubenswrapper[4813]: I1125 10:34:06.392808 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ckjsl" Nov 25 10:34:06 crc kubenswrapper[4813]: I1125 10:34:06.394958 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-d8jnq" event={"ID":"537ea37d-925a-4ba7-95de-307e69630afb","Type":"ContainerStarted","Data":"ea941e48d9ffce4a57cba5b74df4949c7701c8bf342f4c95666214927e8fa6d1"} Nov 25 10:34:06 crc kubenswrapper[4813]: I1125 10:34:06.399404 4813 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-ckjsl container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Nov 25 10:34:06 crc kubenswrapper[4813]: I1125 10:34:06.399508 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ckjsl" podUID="65730459-3e56-4cd2-97f4-4e47f60c32c6" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Nov 25 10:34:06 crc kubenswrapper[4813]: I1125 10:34:06.415993 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-shxrh"] Nov 25 10:34:06 crc kubenswrapper[4813]: I1125 10:34:06.483034 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mpgj4"] Nov 25 10:34:06 crc kubenswrapper[4813]: I1125 10:34:06.483708 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:06 crc kubenswrapper[4813]: E1125 10:34:06.485183 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:06.985152004 +0000 UTC m=+144.114861890 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:06 crc kubenswrapper[4813]: W1125 10:34:06.518969 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5189d915_46f9_4116_b03c_e672fc9a2195.slice/crio-286769464b41accd21e332e19a814f35e09db809f6a82693dca247d1eece0bc4 WatchSource:0}: Error finding container 286769464b41accd21e332e19a814f35e09db809f6a82693dca247d1eece0bc4: Status 404 returned error can't find the container with id 286769464b41accd21e332e19a814f35e09db809f6a82693dca247d1eece0bc4 Nov 25 10:34:06 crc kubenswrapper[4813]: I1125 10:34:06.592973 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:06 crc kubenswrapper[4813]: E1125 10:34:06.593846 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:07.093826731 +0000 UTC m=+144.223536617 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:06 crc kubenswrapper[4813]: I1125 10:34:06.642052 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7s8tp"] Nov 25 10:34:06 crc kubenswrapper[4813]: I1125 10:34:06.681594 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-lsg8f"] Nov 25 10:34:06 crc kubenswrapper[4813]: I1125 10:34:06.700159 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:06 crc kubenswrapper[4813]: E1125 10:34:06.701110 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:07.20109133 +0000 UTC m=+144.330801216 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:06 crc kubenswrapper[4813]: I1125 10:34:06.774564 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hdv9w"] Nov 25 10:34:06 crc kubenswrapper[4813]: I1125 10:34:06.802641 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:06 crc kubenswrapper[4813]: E1125 10:34:06.803104 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:07.303094044 +0000 UTC m=+144.432803930 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:06 crc kubenswrapper[4813]: I1125 10:34:06.803450 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-d2ltx"] Nov 25 10:34:06 crc kubenswrapper[4813]: I1125 10:34:06.845030 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vn7cb"] Nov 25 10:34:06 crc kubenswrapper[4813]: I1125 10:34:06.850507 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401110-g625d"] Nov 25 10:34:06 crc kubenswrapper[4813]: I1125 10:34:06.904219 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:06 crc kubenswrapper[4813]: E1125 10:34:06.904623 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:07.404606155 +0000 UTC m=+144.534316041 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:06 crc kubenswrapper[4813]: I1125 10:34:06.993861 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-xlrc4"] Nov 25 10:34:07 crc kubenswrapper[4813]: I1125 10:34:07.006048 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:07 crc kubenswrapper[4813]: E1125 10:34:07.006405 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:07.506394884 +0000 UTC m=+144.636104770 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:07 crc kubenswrapper[4813]: I1125 10:34:07.107270 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:07 crc kubenswrapper[4813]: E1125 10:34:07.107922 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:07.607874165 +0000 UTC m=+144.737584071 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:07 crc kubenswrapper[4813]: I1125 10:34:07.108056 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:07 crc kubenswrapper[4813]: E1125 10:34:07.110046 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:07.610030844 +0000 UTC m=+144.739740750 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:07 crc kubenswrapper[4813]: I1125 10:34:07.141582 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8xspn"] Nov 25 10:34:07 crc kubenswrapper[4813]: I1125 10:34:07.209797 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:07 crc kubenswrapper[4813]: E1125 10:34:07.210002 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:07.709946601 +0000 UTC m=+144.839656487 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:07 crc kubenswrapper[4813]: I1125 10:34:07.210151 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:07 crc kubenswrapper[4813]: E1125 10:34:07.210470 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:07.710462335 +0000 UTC m=+144.840172221 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:07 crc kubenswrapper[4813]: I1125 10:34:07.310873 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:07 crc kubenswrapper[4813]: E1125 10:34:07.311033 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:07.8110036 +0000 UTC m=+144.940713496 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:07 crc kubenswrapper[4813]: I1125 10:34:07.311101 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:07 crc kubenswrapper[4813]: E1125 10:34:07.311440 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:07.811430842 +0000 UTC m=+144.941140728 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:07 crc kubenswrapper[4813]: I1125 10:34:07.349366 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-tfjp7"] Nov 25 10:34:07 crc kubenswrapper[4813]: I1125 10:34:07.354356 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-mbr49"] Nov 25 10:34:07 crc kubenswrapper[4813]: I1125 10:34:07.356659 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-hf42t"] Nov 25 10:34:07 crc kubenswrapper[4813]: W1125 10:34:07.398059 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod670af6e7_f49a_40c1_9f2d_c3df905e9e44.slice/crio-7825d90c139d9e90969bc63d8506802242d57d01f418c5d5fac646c8e071e0fd WatchSource:0}: Error finding container 7825d90c139d9e90969bc63d8506802242d57d01f418c5d5fac646c8e071e0fd: Status 404 returned error can't find the container with id 7825d90c139d9e90969bc63d8506802242d57d01f418c5d5fac646c8e071e0fd Nov 25 10:34:07 crc kubenswrapper[4813]: W1125 10:34:07.400322 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c96548a_a806_471d_9167_c5c58e8323b9.slice/crio-16f75ff057d2412fc6873692504157842ab309bcf65c502159eb43ef3f762f3c WatchSource:0}: Error finding container 16f75ff057d2412fc6873692504157842ab309bcf65c502159eb43ef3f762f3c: Status 404 returned error can't find the container with id 16f75ff057d2412fc6873692504157842ab309bcf65c502159eb43ef3f762f3c Nov 25 10:34:07 crc kubenswrapper[4813]: W1125 10:34:07.404010 4813 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf38abdc_cb92_4ce5_b0c1_0d5c084aa359.slice/crio-513812009a0ee12bd41c137349cf3c5ea04ff179690e37670e20fd59ce33f837 WatchSource:0}: Error finding container 513812009a0ee12bd41c137349cf3c5ea04ff179690e37670e20fd59ce33f837: Status 404 returned error can't find the container with id 513812009a0ee12bd41c137349cf3c5ea04ff179690e37670e20fd59ce33f837 Nov 25 10:34:07 crc kubenswrapper[4813]: W1125 10:34:07.404618 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb84b3a16_f833_40dc_8356_58bbc7aa3667.slice/crio-0ea0d3e975e3484f63f7785bbaeda5463c83c669b47858957580e8e31f01dbe3 WatchSource:0}: Error finding container 0ea0d3e975e3484f63f7785bbaeda5463c83c669b47858957580e8e31f01dbe3: Status 404 returned error can't find the container with id 0ea0d3e975e3484f63f7785bbaeda5463c83c669b47858957580e8e31f01dbe3 Nov 25 10:34:07 crc kubenswrapper[4813]: I1125 10:34:07.412150 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:07 crc kubenswrapper[4813]: E1125 10:34:07.412196 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:07.912169382 +0000 UTC m=+145.041879278 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:07 crc kubenswrapper[4813]: I1125 10:34:07.412462 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:07 crc kubenswrapper[4813]: E1125 10:34:07.412875 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:07.912864151 +0000 UTC m=+145.042574047 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:07 crc kubenswrapper[4813]: I1125 10:34:07.423568 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-q2vkk" event={"ID":"61f4c501-c97d-4a5b-9105-1918dec567a8","Type":"ContainerStarted","Data":"88326c9893698abb1a1b44d16386acfa8b70c921a04bcaff19e1000349b9974d"} Nov 25 10:34:07 crc kubenswrapper[4813]: I1125 10:34:07.433884 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-dsd6j" event={"ID":"1ef5c9d6-3f23-49c2-87e0-5c6d76ae0aa6","Type":"ContainerStarted","Data":"5cd09bb558c423ab35a229332b437a431056e3a89790f06ced99eb94f9b47738"} Nov 25 10:34:07 crc kubenswrapper[4813]: I1125 10:34:07.435807 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-shxrh" event={"ID":"5189d915-46f9-4116-b03c-e672fc9a2195","Type":"ContainerStarted","Data":"286769464b41accd21e332e19a814f35e09db809f6a82693dca247d1eece0bc4"} Nov 25 10:34:07 crc kubenswrapper[4813]: I1125 10:34:07.441256 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ld2mj" event={"ID":"52c2799b-2750-4c0e-8a0b-b1112a7c25f1","Type":"ContainerStarted","Data":"0c43d1f4f986aa9cb845e1f07fe19130bce215ef4f1213d662d191d21b19d440"} Nov 25 10:34:07 crc kubenswrapper[4813]: I1125 10:34:07.447227 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-97g57" event={"ID":"3734de4b-3d99-4aa7-bb6e-3dc26e9b687e","Type":"ContainerStarted","Data":"b9a849118d9c997d0a79061caff594dba96f324c8adb68fcc589148adc80977d"} Nov 25 10:34:07 crc kubenswrapper[4813]: I1125 10:34:07.448530 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-hkxnn" event={"ID":"4221bb1b-a98a-4ddf-b8cf-21a2db2e2b72","Type":"ContainerStarted","Data":"600ae5eb28f6894300ec85429e69b7b89c425174197bc8d36e1eb4b5cc0b1301"} Nov 25 10:34:07 crc kubenswrapper[4813]: I1125 10:34:07.450939 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-87pc9" event={"ID":"dd388810-9d8b-4057-942d-7249cf14d38f","Type":"ContainerStarted","Data":"2dbb3d045907ed0e2767a0182250dc3e5e8bdf1bfc70c8eeca631410f80f2565"} Nov 25 10:34:07 crc kubenswrapper[4813]: I1125 10:34:07.452035 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nklcx" event={"ID":"cb5d2b7c-15cb-4214-bd8c-0f9d144567f7","Type":"ContainerStarted","Data":"9a9095f20dd9f2b621ab56bbe6b334e59d63b4ec59e7d6ceff376fce6f071345"} Nov 25 10:34:07 crc kubenswrapper[4813]: I1125 10:34:07.454305 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7s8tp" event={"ID":"302f6a62-c67c-48ef-97bc-9b53cdf5f67e","Type":"ContainerStarted","Data":"bb79f31f3c29769689828b72df3bc01da9a448957fdcff837ea94d93edce5bb1"} Nov 25 10:34:07 crc 
kubenswrapper[4813]: I1125 10:34:07.457230 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-frcz9" event={"ID":"54ad0590-7880-4467-b980-334b0ea3807c","Type":"ContainerStarted","Data":"10ea1052a80fb46afac8b9f295921397d28121a5088464557d42242c5bc638d5"} Nov 25 10:34:07 crc kubenswrapper[4813]: I1125 10:34:07.461869 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-lsg8f" event={"ID":"666f7543-9986-44d6-a7e4-dec723ea6a19","Type":"ContainerStarted","Data":"7c432bf64244f1837c1f65b725cc3a42de40d1eb2a2b16a1a51b07aec63eda9c"} Nov 25 10:34:07 crc kubenswrapper[4813]: W1125 10:34:07.461995 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4c4a4714_5f62_440f_ad51_bc55a08ad978.slice/crio-898e00842277a8d4ad8fd05db6d00c0f2297025232b5e8191157f14b4bfca009 WatchSource:0}: Error finding container 898e00842277a8d4ad8fd05db6d00c0f2297025232b5e8191157f14b4bfca009: Status 404 returned error can't find the container with id 898e00842277a8d4ad8fd05db6d00c0f2297025232b5e8191157f14b4bfca009 Nov 25 10:34:07 crc kubenswrapper[4813]: I1125 10:34:07.465820 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ckjsl" podStartSLOduration=123.465799751 podStartE2EDuration="2m3.465799751s" podCreationTimestamp="2025-11-25 10:32:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 10:34:07.463819967 +0000 UTC m=+144.593529863" watchObservedRunningTime="2025-11-25 10:34:07.465799751 +0000 UTC m=+144.595509637" Nov 25 10:34:07 crc kubenswrapper[4813]: I1125 10:34:07.470086 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mpgj4" event={"ID":"8bce249f-7fe9-4bf4-abaa-7c8bc254b488","Type":"ContainerStarted","Data":"4bdbee0854da71c9f214f81cfdd41f962830539f3f5d20f3638b38c360c4b84c"} Nov 25 10:34:07 crc kubenswrapper[4813]: I1125 10:34:07.482210 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g9f6m" event={"ID":"71172c07-4152-40e9-92ee-bee73fb6e3da","Type":"ContainerStarted","Data":"f0110595c7a622e95ef487ab1a53e12811d79e2ce4b4e1c44998ef317489402c"} Nov 25 10:34:07 crc kubenswrapper[4813]: I1125 10:34:07.483803 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-fnjf2" event={"ID":"f88ebc7e-4e3d-4c3f-8f98-ca34b1cc76ad","Type":"ContainerStarted","Data":"00c8f4b927066c31b8ca9212d7ab8a9d19c0151bd73ebb1ef3c54fc02a1017dc"} Nov 25 10:34:07 crc kubenswrapper[4813]: I1125 10:34:07.484800 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7hmqn" event={"ID":"8e74616c-72c8-41c2-901e-272c15e94ee7","Type":"ContainerStarted","Data":"232e873596b9d8ddb7b8f77f54528003d90c934ec2856889c7c0cfd1d509ca52"} Nov 25 10:34:07 crc kubenswrapper[4813]: I1125 10:34:07.485649 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-hvj2g" event={"ID":"b5580f94-06e3-4a91-b3e8-b1d7962438dd","Type":"ContainerStarted","Data":"3efca921cbb145b53d321f633baad9f5fae382b34a113a89fbd3243771696115"} Nov 25 10:34:07 crc kubenswrapper[4813]: I1125 10:34:07.487460 4813 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-wfr92" event={"ID":"609ff7ea-0071-4b93-af38-87f1d04aa886","Type":"ContainerStarted","Data":"040ad227b5671715487280c01cc4dc91ec0316461eb4f2a1dfe7e88721431c4a"} Nov 25 10:34:07 crc kubenswrapper[4813]: I1125 10:34:07.489252 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-wsrtz" event={"ID":"b734673f-c958-487f-8871-cf40f8fe8e0b","Type":"ContainerStarted","Data":"f87a7db585b6e7ef6f95ffbeb40f9f375a43e7e25996dfc58fdb65492ff2c90b"} Nov 25 10:34:07 crc kubenswrapper[4813]: I1125 10:34:07.489279 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-vd4gc" Nov 25 10:34:07 crc kubenswrapper[4813]: I1125 10:34:07.490530 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-n6d5q" Nov 25 10:34:07 crc kubenswrapper[4813]: I1125 10:34:07.490586 4813 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-ckjsl container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Nov 25 10:34:07 crc kubenswrapper[4813]: I1125 10:34:07.490656 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ckjsl" podUID="65730459-3e56-4cd2-97f4-4e47f60c32c6" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Nov 25 10:34:07 crc kubenswrapper[4813]: I1125 10:34:07.498152 4813 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-vd4gc container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Nov 25 10:34:07 crc kubenswrapper[4813]: I1125 10:34:07.498400 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-vd4gc" podUID="09baf5a6-68d3-4173-ba92-46e36fab8a2e" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" Nov 25 10:34:07 crc kubenswrapper[4813]: I1125 10:34:07.498205 4813 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-n6d5q container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" start-of-body= Nov 25 10:34:07 crc kubenswrapper[4813]: I1125 10:34:07.498625 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-n6d5q" podUID="f94406f9-8434-44b5-b86c-15a9d11c4245" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" Nov 25 10:34:07 crc kubenswrapper[4813]: I1125 10:34:07.513344 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:07 crc kubenswrapper[4813]: E1125 10:34:07.513470 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:08.013447797 +0000 UTC m=+145.143157693 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:07 crc kubenswrapper[4813]: I1125 10:34:07.514523 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:07 crc kubenswrapper[4813]: E1125 10:34:07.515184 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:08.015169104 +0000 UTC m=+145.144878990 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:07 crc kubenswrapper[4813]: I1125 10:34:07.616562 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:07 crc kubenswrapper[4813]: E1125 10:34:07.617002 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:08.116973383 +0000 UTC m=+145.246683269 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:07 crc kubenswrapper[4813]: I1125 10:34:07.617220 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:07 crc kubenswrapper[4813]: E1125 10:34:07.619381 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:08.119359498 +0000 UTC m=+145.249069564 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:07 crc kubenswrapper[4813]: I1125 10:34:07.703426 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-rpfp2"] Nov 25 10:34:07 crc kubenswrapper[4813]: I1125 10:34:07.718762 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:07 crc kubenswrapper[4813]: E1125 10:34:07.718863 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:08.218842204 +0000 UTC m=+145.348552100 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:07 crc kubenswrapper[4813]: E1125 10:34:07.719422 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:08.21941532 +0000 UTC m=+145.349125206 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:07 crc kubenswrapper[4813]: I1125 10:34:07.719180 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:07 crc kubenswrapper[4813]: I1125 10:34:07.821213 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:07 crc kubenswrapper[4813]: E1125 10:34:07.821404 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:08.321382373 +0000 UTC m=+145.451092259 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:07 crc kubenswrapper[4813]: I1125 10:34:07.821666 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:07 crc kubenswrapper[4813]: E1125 10:34:07.821969 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:08.321952349 +0000 UTC m=+145.451662345 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:07 crc kubenswrapper[4813]: I1125 10:34:07.845779 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-d8jnq" podStartSLOduration=124.845753381 podStartE2EDuration="2m4.845753381s" podCreationTimestamp="2025-11-25 10:32:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 10:34:07.839047747 +0000 UTC m=+144.968757643" watchObservedRunningTime="2025-11-25 10:34:07.845753381 +0000 UTC m=+144.975463277" Nov 25 10:34:07 crc kubenswrapper[4813]: I1125 10:34:07.881552 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-vd4gc" podStartSLOduration=123.881529621 podStartE2EDuration="2m3.881529621s" podCreationTimestamp="2025-11-25 10:32:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 10:34:07.875938158 +0000 UTC m=+145.005648064" watchObservedRunningTime="2025-11-25 10:34:07.881529621 +0000 UTC m=+145.011239527" Nov 25 10:34:07 crc kubenswrapper[4813]: I1125 10:34:07.921588 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-456g2" podStartSLOduration=124.921566148 podStartE2EDuration="2m4.921566148s" podCreationTimestamp="2025-11-25 10:32:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 10:34:07.91760859 +0000 UTC m=+145.047318476" watchObservedRunningTime="2025-11-25 10:34:07.921566148 +0000 UTC m=+145.051276054" Nov 25 10:34:07 crc kubenswrapper[4813]: I1125 10:34:07.923409 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:07 crc kubenswrapper[4813]: E1125 10:34:07.923833 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:08.42381651 +0000 UTC m=+145.553526406 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:07 crc kubenswrapper[4813]: I1125 10:34:07.961377 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-n6d5q" podStartSLOduration=124.961363049 podStartE2EDuration="2m4.961363049s" podCreationTimestamp="2025-11-25 10:32:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 10:34:07.960465954 +0000 UTC m=+145.090175860" watchObservedRunningTime="2025-11-25 10:34:07.961363049 +0000 UTC m=+145.091072925" Nov 25 10:34:08 crc kubenswrapper[4813]: I1125 10:34:08.024900 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:08 crc kubenswrapper[4813]: E1125 10:34:08.025238 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:08.525221578 +0000 UTC m=+145.654931464 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:08 crc kubenswrapper[4813]: I1125 10:34:08.126037 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:08 crc kubenswrapper[4813]: E1125 10:34:08.126205 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:08.626186293 +0000 UTC m=+145.755896179 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:08 crc kubenswrapper[4813]: I1125 10:34:08.126396 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:08 crc kubenswrapper[4813]: E1125 10:34:08.126978 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:08.626968655 +0000 UTC m=+145.756678541 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:08 crc kubenswrapper[4813]: I1125 10:34:08.228150 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:08 crc kubenswrapper[4813]: E1125 10:34:08.228376 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:08.728341782 +0000 UTC m=+145.858051668 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:08 crc kubenswrapper[4813]: I1125 10:34:08.228698 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:08 crc kubenswrapper[4813]: E1125 10:34:08.229113 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:08.729100173 +0000 UTC m=+145.858810059 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:08 crc kubenswrapper[4813]: I1125 10:34:08.329869 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:08 crc kubenswrapper[4813]: E1125 10:34:08.330154 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:08.83010947 +0000 UTC m=+145.959819356 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:08 crc kubenswrapper[4813]: I1125 10:34:08.330498 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:08 crc kubenswrapper[4813]: E1125 10:34:08.330900 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:08.830883891 +0000 UTC m=+145.960593978 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:08 crc kubenswrapper[4813]: I1125 10:34:08.431663 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:08 crc kubenswrapper[4813]: E1125 10:34:08.431966 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:08.93193349 +0000 UTC m=+146.061643376 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:08 crc kubenswrapper[4813]: I1125 10:34:08.432156 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:08 crc kubenswrapper[4813]: E1125 10:34:08.432641 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:08.932629409 +0000 UTC m=+146.062339295 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:08 crc kubenswrapper[4813]: I1125 10:34:08.499159 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xlrc4" event={"ID":"b84b3a16-f833-40dc-8356-58bbc7aa3667","Type":"ContainerStarted","Data":"0ea0d3e975e3484f63f7785bbaeda5463c83c669b47858957580e8e31f01dbe3"} Nov 25 10:34:08 crc kubenswrapper[4813]: I1125 10:34:08.501000 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-tfjp7" event={"ID":"7a14e28a-ae0c-47e5-b762-cc2f4f191b83","Type":"ContainerStarted","Data":"b73ba9d77a38e5364342ae880714ab8b8df12dcdf83d1ed7fe15131bef30de14"} Nov 25 10:34:08 crc kubenswrapper[4813]: I1125 10:34:08.502494 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-mbr49" event={"ID":"51935f3a-e932-43af-b272-01d8d88a1bf3","Type":"ContainerStarted","Data":"d76558fd671921e31bc226f5761e0d05245087760cedcff05235a18507ddd24c"} Nov 25 10:34:08 crc kubenswrapper[4813]: I1125 10:34:08.504136 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7b2wt" event={"ID":"8804f49f-9764-4368-ab35-dcf4dadfb223","Type":"ContainerStarted","Data":"95def15c6e2f87f1238b1ea902797c6f269f7a347c77a834546cad1e7d484c50"} Nov 25 10:34:08 crc kubenswrapper[4813]: I1125 10:34:08.508943 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hdv9w" event={"ID":"9c96548a-a806-471d-9167-c5c58e8323b9","Type":"ContainerStarted","Data":"16f75ff057d2412fc6873692504157842ab309bcf65c502159eb43ef3f762f3c"} Nov 25 10:34:08 crc kubenswrapper[4813]: I1125 10:34:08.510567 4813 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vn7cb" event={"ID":"dc9b8a2f-2bce-43c9-a8c5-1bf29d7d5964","Type":"ContainerStarted","Data":"127690c58bb29c6b7338b92af673e127c6a384d5be722d8eaa0fed8ff43bc7a8"} Nov 25 10:34:08 crc kubenswrapper[4813]: I1125 10:34:08.511620 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401110-g625d" event={"ID":"670af6e7-f49a-40c1-9f2d-c3df905e9e44","Type":"ContainerStarted","Data":"7825d90c139d9e90969bc63d8506802242d57d01f418c5d5fac646c8e071e0fd"} Nov 25 10:34:08 crc kubenswrapper[4813]: I1125 10:34:08.512991 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-rpfp2" event={"ID":"17571cbf-de36-4b34-af0b-3db7493adaf4","Type":"ContainerStarted","Data":"4d47be0aed0c302547e18e8ddfd27cf1643aabc2810b91a7b34a27127b04dddb"} Nov 25 10:34:08 crc kubenswrapper[4813]: I1125 10:34:08.513855 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8xspn" event={"ID":"82507459-3471-4865-80e3-92a53d57f352","Type":"ContainerStarted","Data":"4a3a9f471e36f36a6076fb7d1f4fad8734bf57f3958084eb187037522ddd7e6a"} Nov 25 10:34:08 crc kubenswrapper[4813]: I1125 10:34:08.516118 4813 generic.go:334] "Generic (PLEG): container finished" podID="7a70dbef-bca6-47b6-8814-424cc0cbf441" containerID="c597c22e6bf24ede5735c45a797770f2548f0cca61b5f046af674acd2cf883b0" exitCode=0 Nov 25 10:34:08 crc kubenswrapper[4813]: I1125 10:34:08.516185 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-5ngzq" event={"ID":"7a70dbef-bca6-47b6-8814-424cc0cbf441","Type":"ContainerDied","Data":"c597c22e6bf24ede5735c45a797770f2548f0cca61b5f046af674acd2cf883b0"} Nov 25 10:34:08 crc kubenswrapper[4813]: I1125 10:34:08.517849 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hf42t" event={"ID":"4c4a4714-5f62-440f-ad51-bc55a08ad978","Type":"ContainerStarted","Data":"898e00842277a8d4ad8fd05db6d00c0f2297025232b5e8191157f14b4bfca009"} Nov 25 10:34:08 crc kubenswrapper[4813]: I1125 10:34:08.518816 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-d2ltx" event={"ID":"bf38abdc-cb92-4ce5-b0c1-0d5c084aa359","Type":"ContainerStarted","Data":"513812009a0ee12bd41c137349cf3c5ea04ff179690e37670e20fd59ce33f837"} Nov 25 10:34:08 crc kubenswrapper[4813]: I1125 10:34:08.519666 4813 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-vd4gc container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Nov 25 10:34:08 crc kubenswrapper[4813]: I1125 10:34:08.519762 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-vd4gc" podUID="09baf5a6-68d3-4173-ba92-46e36fab8a2e" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" Nov 25 10:34:08 crc kubenswrapper[4813]: I1125 10:34:08.520010 4813 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-n6d5q container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get 
\"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" start-of-body= Nov 25 10:34:08 crc kubenswrapper[4813]: I1125 10:34:08.520166 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-n6d5q" podUID="f94406f9-8434-44b5-b86c-15a9d11c4245" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" Nov 25 10:34:08 crc kubenswrapper[4813]: I1125 10:34:08.533725 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:08 crc kubenswrapper[4813]: E1125 10:34:08.534470 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:09.034451319 +0000 UTC m=+146.164161205 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:08 crc kubenswrapper[4813]: I1125 10:34:08.635465 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:08 crc kubenswrapper[4813]: E1125 10:34:08.636231 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:09.136214177 +0000 UTC m=+146.265924063 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:08 crc kubenswrapper[4813]: I1125 10:34:08.737810 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:08 crc kubenswrapper[4813]: E1125 10:34:08.738041 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:09.237981265 +0000 UTC m=+146.367691151 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:08 crc kubenswrapper[4813]: I1125 10:34:08.738269 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:08 crc kubenswrapper[4813]: E1125 10:34:08.738847 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:09.238837999 +0000 UTC m=+146.368547885 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:08 crc kubenswrapper[4813]: I1125 10:34:08.840466 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:08 crc kubenswrapper[4813]: E1125 10:34:08.840628 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:09.340597726 +0000 UTC m=+146.470307622 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:08 crc kubenswrapper[4813]: I1125 10:34:08.841062 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:08 crc kubenswrapper[4813]: E1125 10:34:08.842214 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:09.34217813 +0000 UTC m=+146.471888056 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:08 crc kubenswrapper[4813]: I1125 10:34:08.942228 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:08 crc kubenswrapper[4813]: E1125 10:34:08.942448 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:09.442420586 +0000 UTC m=+146.572130502 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:09 crc kubenswrapper[4813]: I1125 10:34:09.044416 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:09 crc kubenswrapper[4813]: E1125 10:34:09.044994 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:09.544978626 +0000 UTC m=+146.674688512 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:09 crc kubenswrapper[4813]: I1125 10:34:09.145958 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:09 crc kubenswrapper[4813]: E1125 10:34:09.146114 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:09.646090886 +0000 UTC m=+146.775800772 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:09 crc kubenswrapper[4813]: I1125 10:34:09.146156 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:09 crc kubenswrapper[4813]: E1125 10:34:09.146529 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:09.646496977 +0000 UTC m=+146.776206863 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:09 crc kubenswrapper[4813]: I1125 10:34:09.247476 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:09 crc kubenswrapper[4813]: E1125 10:34:09.247668 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:09.747640508 +0000 UTC m=+146.877350394 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:09 crc kubenswrapper[4813]: I1125 10:34:09.247856 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:09 crc kubenswrapper[4813]: E1125 10:34:09.248120 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:09.748113051 +0000 UTC m=+146.877822937 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:09 crc kubenswrapper[4813]: I1125 10:34:09.349451 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:09 crc kubenswrapper[4813]: E1125 10:34:09.349546 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:09.84952939 +0000 UTC m=+146.979239276 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:09 crc kubenswrapper[4813]: I1125 10:34:09.349726 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:09 crc kubenswrapper[4813]: E1125 10:34:09.350004 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:09.849996523 +0000 UTC m=+146.979706399 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:09 crc kubenswrapper[4813]: I1125 10:34:09.450548 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:09 crc kubenswrapper[4813]: E1125 10:34:09.450752 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:09.950731073 +0000 UTC m=+147.080440959 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:09 crc kubenswrapper[4813]: I1125 10:34:09.450796 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:09 crc kubenswrapper[4813]: E1125 10:34:09.451117 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:09.951107123 +0000 UTC m=+147.080817019 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:09 crc kubenswrapper[4813]: I1125 10:34:09.524911 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-w7ltb" event={"ID":"0120e24c-5159-481f-a3d3-e802a58be557","Type":"ContainerStarted","Data":"66793920184b3070c5ab7b45edd7d084c68e95698ce64ccfad0fada131c962c8"} Nov 25 10:34:09 crc kubenswrapper[4813]: I1125 10:34:09.533901 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-482dq" event={"ID":"a4fc4e54-61da-43ab-934e-5f7ed6178ab6","Type":"ContainerStarted","Data":"fca302ad7b4801f6d55e5464ef2f6bc64ce853c553ac4696261fb261ae51b113"} Nov 25 10:34:09 crc kubenswrapper[4813]: I1125 10:34:09.535985 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ld2mj" event={"ID":"52c2799b-2750-4c0e-8a0b-b1112a7c25f1","Type":"ContainerStarted","Data":"85d6e6786ec127d5391ae09453b4e902fe8d42c0c797160e659b33d8b90b44f1"} Nov 25 10:34:09 crc kubenswrapper[4813]: I1125 10:34:09.551720 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:09 crc kubenswrapper[4813]: E1125 10:34:09.552357 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:10.052342757 +0000 UTC m=+147.182052643 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:09 crc kubenswrapper[4813]: I1125 10:34:09.657008 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:09 crc kubenswrapper[4813]: E1125 10:34:09.660341 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-11-25 10:34:10.160322315 +0000 UTC m=+147.290032211 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:09 crc kubenswrapper[4813]: I1125 10:34:09.760445 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:09 crc kubenswrapper[4813]: E1125 10:34:09.760720 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:10.260657744 +0000 UTC m=+147.390367630 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:09 crc kubenswrapper[4813]: I1125 10:34:09.761012 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:09 crc kubenswrapper[4813]: E1125 10:34:09.761324 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:10.261307852 +0000 UTC m=+147.391017738 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:09 crc kubenswrapper[4813]: I1125 10:34:09.862807 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:09 crc kubenswrapper[4813]: E1125 10:34:09.863023 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:10.362986808 +0000 UTC m=+147.492696684 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:09 crc kubenswrapper[4813]: I1125 10:34:09.863270 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:09 crc kubenswrapper[4813]: E1125 10:34:09.863574 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:10.363560903 +0000 UTC m=+147.493270879 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:09 crc kubenswrapper[4813]: I1125 10:34:09.964823 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:09 crc kubenswrapper[4813]: E1125 10:34:09.965190 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:10.465174767 +0000 UTC m=+147.594884653 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:10 crc kubenswrapper[4813]: I1125 10:34:10.066283 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:10 crc kubenswrapper[4813]: E1125 10:34:10.066660 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:10.566648048 +0000 UTC m=+147.696357934 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:10 crc kubenswrapper[4813]: I1125 10:34:10.168087 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:10 crc kubenswrapper[4813]: E1125 10:34:10.168249 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:10.668232391 +0000 UTC m=+147.797942277 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:10 crc kubenswrapper[4813]: I1125 10:34:10.169029 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:10 crc kubenswrapper[4813]: E1125 10:34:10.169320 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:10.66931041 +0000 UTC m=+147.799020296 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:10 crc kubenswrapper[4813]: I1125 10:34:10.269702 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:10 crc kubenswrapper[4813]: E1125 10:34:10.269905 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:10.769879246 +0000 UTC m=+147.899589132 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:10 crc kubenswrapper[4813]: I1125 10:34:10.270075 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:10 crc kubenswrapper[4813]: E1125 10:34:10.270397 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:10.770384409 +0000 UTC m=+147.900094385 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:10 crc kubenswrapper[4813]: I1125 10:34:10.371366 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:10 crc kubenswrapper[4813]: E1125 10:34:10.371568 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:10.871537751 +0000 UTC m=+148.001247637 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:10 crc kubenswrapper[4813]: E1125 10:34:10.372201 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:10.872186149 +0000 UTC m=+148.001896035 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:10 crc kubenswrapper[4813]: I1125 10:34:10.371798 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:10 crc kubenswrapper[4813]: I1125 10:34:10.473383 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:10 crc kubenswrapper[4813]: E1125 10:34:10.473542 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:10.973520575 +0000 UTC m=+148.103230461 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:10 crc kubenswrapper[4813]: I1125 10:34:10.474150 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:10 crc kubenswrapper[4813]: E1125 10:34:10.474423 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:10.97441572 +0000 UTC m=+148.104125606 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:10 crc kubenswrapper[4813]: I1125 10:34:10.541399 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-rpfp2" event={"ID":"17571cbf-de36-4b34-af0b-3db7493adaf4","Type":"ContainerStarted","Data":"003bf407160049d89860e28356ad43e4e32d4913802425bddffc6e315e5a288a"} Nov 25 10:34:10 crc kubenswrapper[4813]: I1125 10:34:10.542980 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mpgj4" event={"ID":"8bce249f-7fe9-4bf4-abaa-7c8bc254b488","Type":"ContainerStarted","Data":"421c3391ad438008f1ded76d46cca35aa36f76d7df2dad2f882756798a2084c2"} Nov 25 10:34:10 crc kubenswrapper[4813]: I1125 10:34:10.544090 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-tfjp7" event={"ID":"7a14e28a-ae0c-47e5-b762-cc2f4f191b83","Type":"ContainerStarted","Data":"e603d91ff70d5cb5b79ca03fdd0502402952ddd35885b24b12ca5d6cce2a5de2"} Nov 25 10:34:10 crc kubenswrapper[4813]: I1125 10:34:10.545105 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hdv9w" event={"ID":"9c96548a-a806-471d-9167-c5c58e8323b9","Type":"ContainerStarted","Data":"94046fd8c295da993386f5677e78c14d862d61ddb4365a9beca4de7f1ceb0f1f"} Nov 25 10:34:10 crc kubenswrapper[4813]: I1125 10:34:10.546229 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-48zrm" event={"ID":"616a1226-9627-43a9-a1a7-5dfb4cf863d8","Type":"ContainerStarted","Data":"d68e38c9516fa768ef92e030685c969e6246d7a943102e33af925a93295c262f"} Nov 25 10:34:10 crc kubenswrapper[4813]: I1125 10:34:10.547444 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-dsd6j" event={"ID":"1ef5c9d6-3f23-49c2-87e0-5c6d76ae0aa6","Type":"ContainerStarted","Data":"93f0810b54bdbacbe34f19074276d0e9512acc23f98e01b6184a2aeed04b885a"} Nov 25 10:34:10 crc kubenswrapper[4813]: I1125 10:34:10.548583 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vn7cb" event={"ID":"dc9b8a2f-2bce-43c9-a8c5-1bf29d7d5964","Type":"ContainerStarted","Data":"47d45fb1e0fc26def1389c30c3441d175ef3b2f40f3d012ac0f6301d05d20f6f"} Nov 25 10:34:10 crc kubenswrapper[4813]: I1125 10:34:10.549488 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7hmqn" event={"ID":"8e74616c-72c8-41c2-901e-272c15e94ee7","Type":"ContainerStarted","Data":"1501ac609d8192c1a8497a56ad76b38b29cfc67c1be5f5a661180298f619b435"} Nov 25 10:34:10 crc kubenswrapper[4813]: I1125 10:34:10.550632 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401110-g625d" event={"ID":"670af6e7-f49a-40c1-9f2d-c3df905e9e44","Type":"ContainerStarted","Data":"8475003880e18a6f2b6992374c4eda484325b7b3645e9124c895a18668110917"} 
Nov 25 10:34:10 crc kubenswrapper[4813]: I1125 10:34:10.551797 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-d2ltx" event={"ID":"bf38abdc-cb92-4ce5-b0c1-0d5c084aa359","Type":"ContainerStarted","Data":"9428d92ccd71b619c038293a9cbb09b4ce7a61b9db5e63eced390cc1cfd6bb49"} Nov 25 10:34:10 crc kubenswrapper[4813]: I1125 10:34:10.553056 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-frcz9" event={"ID":"54ad0590-7880-4467-b980-334b0ea3807c","Type":"ContainerStarted","Data":"938fe57b8a3574738ed43a46a69818a5535a66ab08ed6775a2135527b1d030da"} Nov 25 10:34:10 crc kubenswrapper[4813]: I1125 10:34:10.554420 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8xspn" event={"ID":"82507459-3471-4865-80e3-92a53d57f352","Type":"ContainerStarted","Data":"c9b4db7da4f971dce3abd371d2ebb6238e0d6d14b733e3348424e6fca4f467bb"} Nov 25 10:34:10 crc kubenswrapper[4813]: I1125 10:34:10.556945 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g9f6m" event={"ID":"71172c07-4152-40e9-92ee-bee73fb6e3da","Type":"ContainerStarted","Data":"34e45fbdce18dd0ea369c196fe97c410bcf38b34204e9e4508a0f20c856a6474"} Nov 25 10:34:10 crc kubenswrapper[4813]: I1125 10:34:10.558158 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xlrc4" event={"ID":"b84b3a16-f833-40dc-8356-58bbc7aa3667","Type":"ContainerStarted","Data":"21e03cfb9cd32732a4c437f2113951b3d83cf76e23de847ec36957b5db8c8893"} Nov 25 10:34:10 crc kubenswrapper[4813]: I1125 10:34:10.559204 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-mbr49" event={"ID":"51935f3a-e932-43af-b272-01d8d88a1bf3","Type":"ContainerStarted","Data":"dad61a1aca203457866e8418f2e077e70d7b465405ac3cd7b09ad78f39e1ccbe"} Nov 25 10:34:10 crc kubenswrapper[4813]: I1125 10:34:10.560281 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-hvj2g" event={"ID":"b5580f94-06e3-4a91-b3e8-b1d7962438dd","Type":"ContainerStarted","Data":"994372b577cdab69634dbb552480852affb01431439ed7f3dbfb4bf8e2c379d7"} Nov 25 10:34:10 crc kubenswrapper[4813]: I1125 10:34:10.562311 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2d8r7" event={"ID":"5cc0ff08-77b5-4ca3-bded-0dd386a5009d","Type":"ContainerStarted","Data":"554445f8076b9de5747517c4c790439acdf4fbab9a7e2da26f1da5876cfec73f"} Nov 25 10:34:10 crc kubenswrapper[4813]: I1125 10:34:10.564240 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-hkxnn" event={"ID":"4221bb1b-a98a-4ddf-b8cf-21a2db2e2b72","Type":"ContainerStarted","Data":"d734ae9cdc85348308d72ecb4e5dd024355c5c4ef3e6bea18fed1a05fd339a31"} Nov 25 10:34:10 crc kubenswrapper[4813]: I1125 10:34:10.566128 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-97g57" event={"ID":"3734de4b-3d99-4aa7-bb6e-3dc26e9b687e","Type":"ContainerStarted","Data":"a22c61f2ec88865d1b547109428a0f37ad72801e5b40bc52b75810fe12ab537f"} Nov 25 10:34:10 crc kubenswrapper[4813]: I1125 10:34:10.567485 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-shxrh" event={"ID":"5189d915-46f9-4116-b03c-e672fc9a2195","Type":"ContainerStarted","Data":"e309e871c94570a9d7b500260b1c3d19650885027d72744c4a2781f31bb3993a"} Nov 25 10:34:10 crc kubenswrapper[4813]: I1125 10:34:10.568799 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hf42t" event={"ID":"4c4a4714-5f62-440f-ad51-bc55a08ad978","Type":"ContainerStarted","Data":"529bc8b6a1c94ad3395061bd8eec1c50111fdec424658625aefd72bf92566f9d"} Nov 25 10:34:10 crc kubenswrapper[4813]: I1125 10:34:10.570042 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nklcx" event={"ID":"cb5d2b7c-15cb-4214-bd8c-0f9d144567f7","Type":"ContainerStarted","Data":"a3bcfc221c6dc21addfca50020eac7717a786484dacddcecc1f86e0b8ec0bc06"} Nov 25 10:34:10 crc kubenswrapper[4813]: I1125 10:34:10.571160 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-fnjf2" event={"ID":"f88ebc7e-4e3d-4c3f-8f98-ca34b1cc76ad","Type":"ContainerStarted","Data":"a0c9aca1be95d42aa13538a50160d90c12a36a2fbefd51386188689d420960c1"} Nov 25 10:34:10 crc kubenswrapper[4813]: I1125 10:34:10.572242 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-wsrtz" event={"ID":"b734673f-c958-487f-8871-cf40f8fe8e0b","Type":"ContainerStarted","Data":"cd3fcc11c39f5552ea61898c566efa86c6fadfac32876dc6b4a3ed999257b29f"} Nov 25 10:34:10 crc kubenswrapper[4813]: I1125 10:34:10.573323 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-lsg8f" event={"ID":"666f7543-9986-44d6-a7e4-dec723ea6a19","Type":"ContainerStarted","Data":"9aa88984ec1cc069b47406138bec36cdce8bccd1c76987dd9fcbf6998eb0e5de"} Nov 25 10:34:10 crc kubenswrapper[4813]: I1125 10:34:10.574490 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kht7r" event={"ID":"083d68a5-93e4-4dbd-9ba0-e4e7d30da8f7","Type":"ContainerStarted","Data":"f6a754416ef8f8dca249538af69dc20edd07ab33c2c5f019f6ed1ce077a02edc"} Nov 25 10:34:10 crc kubenswrapper[4813]: I1125 10:34:10.574830 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:10 crc kubenswrapper[4813]: E1125 10:34:10.574950 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:11.074935573 +0000 UTC m=+148.204645459 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:10 crc kubenswrapper[4813]: I1125 10:34:10.575038 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:10 crc kubenswrapper[4813]: E1125 10:34:10.575350 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:11.075339115 +0000 UTC m=+148.205049001 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:10 crc kubenswrapper[4813]: I1125 10:34:10.575620 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7s8tp" event={"ID":"302f6a62-c67c-48ef-97bc-9b53cdf5f67e","Type":"ContainerStarted","Data":"ee6dfb30c998d09b0d57792c9099b33a4a9752a7305d7672a192f3a55e155b95"} Nov 25 10:34:10 crc kubenswrapper[4813]: I1125 10:34:10.594067 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-w7ltb" podStartSLOduration=126.594043327 podStartE2EDuration="2m6.594043327s" podCreationTimestamp="2025-11-25 10:32:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 10:34:10.591827886 +0000 UTC m=+147.721537792" watchObservedRunningTime="2025-11-25 10:34:10.594043327 +0000 UTC m=+147.723753223" Nov 25 10:34:10 crc kubenswrapper[4813]: I1125 10:34:10.625572 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-87pc9" podStartSLOduration=127.6255571 podStartE2EDuration="2m7.6255571s" podCreationTimestamp="2025-11-25 10:32:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 10:34:10.623991428 +0000 UTC m=+147.753701334" watchObservedRunningTime="2025-11-25 10:34:10.6255571 +0000 UTC m=+147.755266986" Nov 25 10:34:10 crc kubenswrapper[4813]: I1125 10:34:10.659671 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-wfr92" podStartSLOduration=126.659653905 podStartE2EDuration="2m6.659653905s" 
podCreationTimestamp="2025-11-25 10:32:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 10:34:10.642498255 +0000 UTC m=+147.772208161" watchObservedRunningTime="2025-11-25 10:34:10.659653905 +0000 UTC m=+147.789363791" Nov 25 10:34:10 crc kubenswrapper[4813]: I1125 10:34:10.676236 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:10 crc kubenswrapper[4813]: E1125 10:34:10.676423 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:11.176392503 +0000 UTC m=+148.306102389 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:10 crc kubenswrapper[4813]: I1125 10:34:10.676741 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:10 crc kubenswrapper[4813]: E1125 10:34:10.678247 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:11.178231544 +0000 UTC m=+148.307941460 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:10 crc kubenswrapper[4813]: I1125 10:34:10.778413 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:10 crc kubenswrapper[4813]: E1125 10:34:10.778535 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-25 10:34:11.278510721 +0000 UTC m=+148.408220617 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:10 crc kubenswrapper[4813]: I1125 10:34:10.778729 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:10 crc kubenswrapper[4813]: E1125 10:34:10.779000 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:11.278992504 +0000 UTC m=+148.408702380 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:10 crc kubenswrapper[4813]: I1125 10:34:10.880444 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:10 crc kubenswrapper[4813]: E1125 10:34:10.880648 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:11.380616638 +0000 UTC m=+148.510326534 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:10 crc kubenswrapper[4813]: I1125 10:34:10.880733 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:10 crc kubenswrapper[4813]: E1125 10:34:10.881033 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:11.381017229 +0000 UTC m=+148.510727155 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:10 crc kubenswrapper[4813]: I1125 10:34:10.981695 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:10 crc kubenswrapper[4813]: E1125 10:34:10.981801 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:11.48177599 +0000 UTC m=+148.611485876 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:10 crc kubenswrapper[4813]: I1125 10:34:10.982276 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:10 crc kubenswrapper[4813]: E1125 10:34:10.982561 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:11.482550781 +0000 UTC m=+148.612260667 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:11 crc kubenswrapper[4813]: I1125 10:34:11.083582 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:11 crc kubenswrapper[4813]: E1125 10:34:11.083780 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:11.583749164 +0000 UTC m=+148.713459050 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:11 crc kubenswrapper[4813]: I1125 10:34:11.083851 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:11 crc kubenswrapper[4813]: E1125 10:34:11.084217 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:11.584197346 +0000 UTC m=+148.713907302 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:11 crc kubenswrapper[4813]: I1125 10:34:11.185326 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:11 crc kubenswrapper[4813]: E1125 10:34:11.185465 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:11.68543953 +0000 UTC m=+148.815149416 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:11 crc kubenswrapper[4813]: I1125 10:34:11.185577 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:11 crc kubenswrapper[4813]: E1125 10:34:11.185917 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:11.685908773 +0000 UTC m=+148.815618659 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:11 crc kubenswrapper[4813]: I1125 10:34:11.288313 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:11 crc kubenswrapper[4813]: E1125 10:34:11.288787 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:11.788769551 +0000 UTC m=+148.918479437 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:11 crc kubenswrapper[4813]: I1125 10:34:11.389478 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:11 crc kubenswrapper[4813]: E1125 10:34:11.389817 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:11.889805189 +0000 UTC m=+149.019515075 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:11 crc kubenswrapper[4813]: I1125 10:34:11.494205 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:11 crc kubenswrapper[4813]: E1125 10:34:11.494366 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:11.994345993 +0000 UTC m=+149.124055889 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:11 crc kubenswrapper[4813]: I1125 10:34:11.496643 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:11 crc kubenswrapper[4813]: E1125 10:34:11.497036 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:11.997018926 +0000 UTC m=+149.126728812 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:11 crc kubenswrapper[4813]: I1125 10:34:11.583551 4813 generic.go:334] "Generic (PLEG): container finished" podID="54ad0590-7880-4467-b980-334b0ea3807c" containerID="938fe57b8a3574738ed43a46a69818a5535a66ab08ed6775a2135527b1d030da" exitCode=0 Nov 25 10:34:11 crc kubenswrapper[4813]: I1125 10:34:11.583655 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-frcz9" event={"ID":"54ad0590-7880-4467-b980-334b0ea3807c","Type":"ContainerDied","Data":"938fe57b8a3574738ed43a46a69818a5535a66ab08ed6775a2135527b1d030da"} Nov 25 10:34:11 crc kubenswrapper[4813]: I1125 10:34:11.584782 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-shxrh" Nov 25 10:34:11 crc kubenswrapper[4813]: I1125 10:34:11.584840 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mpgj4" Nov 25 10:34:11 crc kubenswrapper[4813]: I1125 10:34:11.586447 4813 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-mpgj4 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body= Nov 25 10:34:11 crc kubenswrapper[4813]: I1125 10:34:11.586486 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mpgj4" podUID="8bce249f-7fe9-4bf4-abaa-7c8bc254b488" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" Nov 25 10:34:11 crc kubenswrapper[4813]: I1125 
10:34:11.587035 4813 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-shxrh container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Nov 25 10:34:11 crc kubenswrapper[4813]: I1125 10:34:11.587077 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-shxrh" podUID="5189d915-46f9-4116-b03c-e672fc9a2195" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" Nov 25 10:34:11 crc kubenswrapper[4813]: I1125 10:34:11.597613 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-482dq" podStartSLOduration=127.597587022 podStartE2EDuration="2m7.597587022s" podCreationTimestamp="2025-11-25 10:32:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 10:34:10.658523694 +0000 UTC m=+147.788233600" watchObservedRunningTime="2025-11-25 10:34:11.597587022 +0000 UTC m=+148.727296928" Nov 25 10:34:11 crc kubenswrapper[4813]: I1125 10:34:11.598087 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:11 crc kubenswrapper[4813]: E1125 10:34:11.598319 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:12.098294081 +0000 UTC m=+149.228003997 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:11 crc kubenswrapper[4813]: I1125 10:34:11.614124 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mpgj4" podStartSLOduration=127.614103574 podStartE2EDuration="2m7.614103574s" podCreationTimestamp="2025-11-25 10:32:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 10:34:11.611222855 +0000 UTC m=+148.740932771" watchObservedRunningTime="2025-11-25 10:34:11.614103574 +0000 UTC m=+148.743813460" Nov 25 10:34:11 crc kubenswrapper[4813]: I1125 10:34:11.625676 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-fnjf2" podStartSLOduration=8.625658371 podStartE2EDuration="8.625658371s" podCreationTimestamp="2025-11-25 10:34:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 10:34:11.624782237 +0000 UTC m=+148.754492133" watchObservedRunningTime="2025-11-25 10:34:11.625658371 +0000 UTC m=+148.755368257" Nov 25 10:34:11 crc kubenswrapper[4813]: I1125 10:34:11.646212 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ld2mj" podStartSLOduration=127.646192842 podStartE2EDuration="2m7.646192842s" podCreationTimestamp="2025-11-25 10:32:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 10:34:11.643094567 +0000 UTC m=+148.772804473" watchObservedRunningTime="2025-11-25 10:34:11.646192842 +0000 UTC m=+148.775902728" Nov 25 10:34:11 crc kubenswrapper[4813]: I1125 10:34:11.662047 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hdv9w" podStartSLOduration=127.662025586 podStartE2EDuration="2m7.662025586s" podCreationTimestamp="2025-11-25 10:32:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 10:34:11.660726381 +0000 UTC m=+148.790436277" watchObservedRunningTime="2025-11-25 10:34:11.662025586 +0000 UTC m=+148.791735472" Nov 25 10:34:11 crc kubenswrapper[4813]: I1125 10:34:11.699639 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:11 crc kubenswrapper[4813]: E1125 10:34:11.700281 4813 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:12.200262534 +0000 UTC m=+149.329972470 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:11 crc kubenswrapper[4813]: I1125 10:34:11.718798 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7hmqn" podStartSLOduration=127.718781641 podStartE2EDuration="2m7.718781641s" podCreationTimestamp="2025-11-25 10:32:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 10:34:11.718465913 +0000 UTC m=+148.848175819" watchObservedRunningTime="2025-11-25 10:34:11.718781641 +0000 UTC m=+148.848491527" Nov 25 10:34:11 crc kubenswrapper[4813]: I1125 10:34:11.746576 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-d2ltx" podStartSLOduration=127.746560382 podStartE2EDuration="2m7.746560382s" podCreationTimestamp="2025-11-25 10:32:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 10:34:11.744030053 +0000 UTC m=+148.873739959" watchObservedRunningTime="2025-11-25 10:34:11.746560382 +0000 UTC m=+148.876270268" Nov 25 10:34:11 crc kubenswrapper[4813]: I1125 10:34:11.803315 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:11 crc kubenswrapper[4813]: I1125 10:34:11.803493 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 10:34:11 crc kubenswrapper[4813]: E1125 10:34:11.803867 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:12.303834731 +0000 UTC m=+149.433544617 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:11 crc kubenswrapper[4813]: I1125 10:34:11.816873 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 10:34:11 crc kubenswrapper[4813]: I1125 10:34:11.817593 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g9f6m" podStartSLOduration=127.817574118 podStartE2EDuration="2m7.817574118s" podCreationTimestamp="2025-11-25 10:32:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 10:34:11.816739225 +0000 UTC m=+148.946449121" watchObservedRunningTime="2025-11-25 10:34:11.817574118 +0000 UTC m=+148.947284004" Nov 25 10:34:11 crc kubenswrapper[4813]: I1125 10:34:11.818626 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-97g57" podStartSLOduration=9.818619246 podStartE2EDuration="9.818619246s" podCreationTimestamp="2025-11-25 10:34:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 10:34:11.781830349 +0000 UTC m=+148.911540245" watchObservedRunningTime="2025-11-25 10:34:11.818619246 +0000 UTC m=+148.948329132" Nov 25 10:34:11 crc kubenswrapper[4813]: I1125 10:34:11.861545 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-hvj2g" podStartSLOduration=127.861520422 podStartE2EDuration="2m7.861520422s" podCreationTimestamp="2025-11-25 10:32:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 10:34:11.854774447 +0000 UTC m=+148.984484363" watchObservedRunningTime="2025-11-25 10:34:11.861520422 +0000 UTC m=+148.991230308" Nov 25 10:34:11 crc kubenswrapper[4813]: I1125 10:34:11.893884 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-shxrh" podStartSLOduration=127.893863658 podStartE2EDuration="2m7.893863658s" podCreationTimestamp="2025-11-25 10:32:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 10:34:11.887312689 +0000 UTC m=+149.017022565" watchObservedRunningTime="2025-11-25 10:34:11.893863658 +0000 UTC m=+149.023573564" Nov 25 10:34:11 crc kubenswrapper[4813]: I1125 10:34:11.904482 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:11 crc kubenswrapper[4813]: I1125 10:34:11.904770 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 10:34:11 crc kubenswrapper[4813]: I1125 10:34:11.904854 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 10:34:11 crc kubenswrapper[4813]: E1125 10:34:11.904962 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:12.404940811 +0000 UTC m=+149.534650768 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:11 crc kubenswrapper[4813]: I1125 10:34:11.905082 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 10:34:11 crc kubenswrapper[4813]: I1125 10:34:11.910425 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 10:34:11 crc kubenswrapper[4813]: I1125 10:34:11.911902 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 10:34:11 crc kubenswrapper[4813]: I1125 10:34:11.914415 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29401110-g625d" podStartSLOduration=127.914401891 podStartE2EDuration="2m7.914401891s" podCreationTimestamp="2025-11-25 10:32:04 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 10:34:11.911846331 +0000 UTC m=+149.041556237" watchObservedRunningTime="2025-11-25 10:34:11.914401891 +0000 UTC m=+149.044111787" Nov 25 10:34:11 crc kubenswrapper[4813]: I1125 10:34:11.923542 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 10:34:11 crc kubenswrapper[4813]: I1125 10:34:11.941652 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 10:34:11 crc kubenswrapper[4813]: I1125 10:34:11.966661 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-hkxnn" podStartSLOduration=127.966642242 podStartE2EDuration="2m7.966642242s" podCreationTimestamp="2025-11-25 10:32:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 10:34:11.941729429 +0000 UTC m=+149.071439335" watchObservedRunningTime="2025-11-25 10:34:11.966642242 +0000 UTC m=+149.096352138" Nov 25 10:34:11 crc kubenswrapper[4813]: I1125 10:34:11.969058 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-hvj2g" Nov 25 10:34:11 crc kubenswrapper[4813]: I1125 10:34:11.971530 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2d8r7" podStartSLOduration=127.971519236 podStartE2EDuration="2m7.971519236s" podCreationTimestamp="2025-11-25 10:32:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 10:34:11.965071109 +0000 UTC m=+149.094780995" watchObservedRunningTime="2025-11-25 10:34:11.971519236 +0000 UTC m=+149.101229122" Nov 25 10:34:11 crc kubenswrapper[4813]: I1125 10:34:11.973883 4813 patch_prober.go:28] interesting pod/router-default-5444994796-hvj2g container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Nov 25 10:34:11 crc kubenswrapper[4813]: I1125 10:34:11.973946 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hvj2g" podUID="b5580f94-06e3-4a91-b3e8-b1d7962438dd" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Nov 25 10:34:12 crc kubenswrapper[4813]: I1125 10:34:12.013883 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8xspn" podStartSLOduration=128.013863346 podStartE2EDuration="2m8.013863346s" podCreationTimestamp="2025-11-25 10:32:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 10:34:12.013370872 +0000 UTC m=+149.143080768" watchObservedRunningTime="2025-11-25 10:34:12.013863346 +0000 UTC 
m=+149.143573232" Nov 25 10:34:12 crc kubenswrapper[4813]: I1125 10:34:12.013976 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:12 crc kubenswrapper[4813]: E1125 10:34:12.014039 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:12.51402073 +0000 UTC m=+149.643730616 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:12 crc kubenswrapper[4813]: I1125 10:34:12.015665 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:12 crc kubenswrapper[4813]: I1125 10:34:12.015256 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vn7cb" podStartSLOduration=128.015248304 podStartE2EDuration="2m8.015248304s" podCreationTimestamp="2025-11-25 10:32:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 10:34:11.985508459 +0000 UTC m=+149.115218345" watchObservedRunningTime="2025-11-25 10:34:12.015248304 +0000 UTC m=+149.144958200" Nov 25 10:34:12 crc kubenswrapper[4813]: E1125 10:34:12.016105 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:12.516090227 +0000 UTC m=+149.645800113 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:12 crc kubenswrapper[4813]: I1125 10:34:12.040967 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 10:34:12 crc kubenswrapper[4813]: I1125 10:34:12.051423 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 10:34:12 crc kubenswrapper[4813]: I1125 10:34:12.116533 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:12 crc kubenswrapper[4813]: E1125 10:34:12.116961 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:12.61694579 +0000 UTC m=+149.746655676 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:12 crc kubenswrapper[4813]: I1125 10:34:12.220776 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:12 crc kubenswrapper[4813]: E1125 10:34:12.221490 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:12.721473934 +0000 UTC m=+149.851183820 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:12 crc kubenswrapper[4813]: I1125 10:34:12.322349 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:12 crc kubenswrapper[4813]: E1125 10:34:12.322761 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:12.822734418 +0000 UTC m=+149.952444294 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:12 crc kubenswrapper[4813]: I1125 10:34:12.323473 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:12 crc kubenswrapper[4813]: E1125 10:34:12.324037 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:12.824025393 +0000 UTC m=+149.953735279 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:12 crc kubenswrapper[4813]: I1125 10:34:12.425195 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:12 crc kubenswrapper[4813]: E1125 10:34:12.425475 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:12.925423272 +0000 UTC m=+150.055133158 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:12 crc kubenswrapper[4813]: I1125 10:34:12.425998 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:12 crc kubenswrapper[4813]: E1125 10:34:12.426529 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:12.926519782 +0000 UTC m=+150.056229668 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:12 crc kubenswrapper[4813]: I1125 10:34:12.526959 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:12 crc kubenswrapper[4813]: E1125 10:34:12.527144 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:13.027119128 +0000 UTC m=+150.156829014 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:12 crc kubenswrapper[4813]: I1125 10:34:12.527310 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:12 crc kubenswrapper[4813]: E1125 10:34:12.527690 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:13.027658793 +0000 UTC m=+150.157368679 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:12 crc kubenswrapper[4813]: I1125 10:34:12.590278 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"0fcc604c0a17a21251cdba0f1ee8b305e74eb420f4a8c1f671f295f062c4c66b"} Nov 25 10:34:12 crc kubenswrapper[4813]: I1125 10:34:12.592081 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-48zrm" event={"ID":"616a1226-9627-43a9-a1a7-5dfb4cf863d8","Type":"ContainerStarted","Data":"dd924da60543d26f69eea7a794fdbdd6c31d7f584524e9bfb3eb37ed9e5f734d"} Nov 25 10:34:12 crc kubenswrapper[4813]: I1125 10:34:12.593345 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"395c0b450c3b385eaf5fedb876aeaf2f488b05c8ef5bc29daa0bbab493f72b5b"} Nov 25 10:34:12 crc kubenswrapper[4813]: I1125 10:34:12.596270 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cg7wn" event={"ID":"b543d0c3-b775-4c87-bbd0-016e86361945","Type":"ContainerStarted","Data":"d54123de42887a235d394f5e48a6883b89a89b1f73d15b0b9802756172e94da6"} Nov 25 10:34:12 crc kubenswrapper[4813]: I1125 10:34:12.597385 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"bf329709f7a9d8318e228ec08d8ece09fa4dcebec58ea7908daa1a38b5147f21"} Nov 25 10:34:12 crc kubenswrapper[4813]: I1125 10:34:12.599364 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7b2wt" event={"ID":"8804f49f-9764-4368-ab35-dcf4dadfb223","Type":"ContainerStarted","Data":"538454a184b67f484d0588d8b8c491475612a7e971763cbf364da4673e314b41"} Nov 25 10:34:12 crc kubenswrapper[4813]: I1125 10:34:12.601556 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-5ngzq" event={"ID":"7a70dbef-bca6-47b6-8814-424cc0cbf441","Type":"ContainerStarted","Data":"a0a49a1490f7a73a14b8d007245ec0fe20daee9bd04ebb81a775f2617453cb49"} Nov 25 10:34:12 crc kubenswrapper[4813]: I1125 10:34:12.602529 4813 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-shxrh container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Nov 25 10:34:12 crc kubenswrapper[4813]: I1125 10:34:12.602578 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-shxrh" podUID="5189d915-46f9-4116-b03c-e672fc9a2195" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" Nov 25 10:34:12 crc kubenswrapper[4813]: I1125 10:34:12.602698 4813 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-mpgj4 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body= Nov 25 10:34:12 crc kubenswrapper[4813]: I1125 10:34:12.602762 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mpgj4" podUID="8bce249f-7fe9-4bf4-abaa-7c8bc254b488" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" Nov 25 10:34:12 crc kubenswrapper[4813]: I1125 10:34:12.619305 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-mbr49" podStartSLOduration=128.619276243 podStartE2EDuration="2m8.619276243s" podCreationTimestamp="2025-11-25 10:32:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 10:34:12.614616315 +0000 UTC m=+149.744326201" watchObservedRunningTime="2025-11-25 10:34:12.619276243 +0000 UTC m=+149.748986129" Nov 25 10:34:12 crc kubenswrapper[4813]: I1125 10:34:12.628478 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:12 crc kubenswrapper[4813]: E1125 10:34:12.628700 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:13.12865412 +0000 UTC m=+150.258364006 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:12 crc kubenswrapper[4813]: I1125 10:34:12.628986 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:12 crc kubenswrapper[4813]: E1125 10:34:12.629483 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:13.129464732 +0000 UTC m=+150.259174618 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:12 crc kubenswrapper[4813]: I1125 10:34:12.653347 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-7s8tp" podStartSLOduration=128.653331206 podStartE2EDuration="2m8.653331206s" podCreationTimestamp="2025-11-25 10:32:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 10:34:12.652422011 +0000 UTC m=+149.782131897" watchObservedRunningTime="2025-11-25 10:34:12.653331206 +0000 UTC m=+149.783041092" Nov 25 10:34:12 crc kubenswrapper[4813]: I1125 10:34:12.653905 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-rpfp2" podStartSLOduration=128.653899341 podStartE2EDuration="2m8.653899341s" podCreationTimestamp="2025-11-25 10:32:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 10:34:12.633635446 +0000 UTC m=+149.763345352" watchObservedRunningTime="2025-11-25 10:34:12.653899341 +0000 UTC m=+149.783609227" Nov 25 10:34:12 crc kubenswrapper[4813]: I1125 10:34:12.729872 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:12 crc kubenswrapper[4813]: E1125 10:34:12.730019 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b 
nodeName:}" failed. No retries permitted until 2025-11-25 10:34:13.229993126 +0000 UTC m=+150.359703012 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:12 crc kubenswrapper[4813]: I1125 10:34:12.730258 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:12 crc kubenswrapper[4813]: E1125 10:34:12.731710 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:13.231671982 +0000 UTC m=+150.361381868 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:12 crc kubenswrapper[4813]: I1125 10:34:12.831821 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:12 crc kubenswrapper[4813]: E1125 10:34:12.832136 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:13.332122424 +0000 UTC m=+150.461832300 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:12 crc kubenswrapper[4813]: I1125 10:34:12.933783 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:12 crc kubenswrapper[4813]: E1125 10:34:12.934418 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:13.434399206 +0000 UTC m=+150.564109092 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:12 crc kubenswrapper[4813]: I1125 10:34:12.971138 4813 patch_prober.go:28] interesting pod/router-default-5444994796-hvj2g container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Nov 25 10:34:12 crc kubenswrapper[4813]: I1125 10:34:12.971201 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hvj2g" podUID="b5580f94-06e3-4a91-b3e8-b1d7962438dd" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Nov 25 10:34:13 crc kubenswrapper[4813]: I1125 10:34:13.034882 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:13 crc kubenswrapper[4813]: E1125 10:34:13.035057 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:13.535025663 +0000 UTC m=+150.664735549 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:13 crc kubenswrapper[4813]: I1125 10:34:13.035167 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:13 crc kubenswrapper[4813]: E1125 10:34:13.035478 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:13.535467835 +0000 UTC m=+150.665177811 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:13 crc kubenswrapper[4813]: I1125 10:34:13.136095 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:13 crc kubenswrapper[4813]: E1125 10:34:13.136254 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:13.636224666 +0000 UTC m=+150.765934562 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:13 crc kubenswrapper[4813]: I1125 10:34:13.136725 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:13 crc kubenswrapper[4813]: E1125 10:34:13.137066 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:13.637051939 +0000 UTC m=+150.766761825 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:13 crc kubenswrapper[4813]: I1125 10:34:13.237562 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:13 crc kubenswrapper[4813]: E1125 10:34:13.237773 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:13.737744677 +0000 UTC m=+150.867454563 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:13 crc kubenswrapper[4813]: I1125 10:34:13.237941 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:13 crc kubenswrapper[4813]: E1125 10:34:13.238331 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:13.738303233 +0000 UTC m=+150.868013119 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:13 crc kubenswrapper[4813]: I1125 10:34:13.339101 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:13 crc kubenswrapper[4813]: E1125 10:34:13.339325 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:13.83929312 +0000 UTC m=+150.969003016 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:13 crc kubenswrapper[4813]: I1125 10:34:13.339522 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:13 crc kubenswrapper[4813]: E1125 10:34:13.339867 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:13.839853565 +0000 UTC m=+150.969563451 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:13 crc kubenswrapper[4813]: I1125 10:34:13.440294 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:13 crc kubenswrapper[4813]: E1125 10:34:13.440533 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:13.940500402 +0000 UTC m=+151.070210288 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:13 crc kubenswrapper[4813]: I1125 10:34:13.440592 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:13 crc kubenswrapper[4813]: E1125 10:34:13.440941 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:13.940925754 +0000 UTC m=+151.070635740 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:13 crc kubenswrapper[4813]: I1125 10:34:13.542236 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:13 crc kubenswrapper[4813]: E1125 10:34:13.542417 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:14.042391964 +0000 UTC m=+151.172101850 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:13 crc kubenswrapper[4813]: I1125 10:34:13.542911 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:13 crc kubenswrapper[4813]: E1125 10:34:13.543213 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:14.043204346 +0000 UTC m=+151.172914232 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:13 crc kubenswrapper[4813]: I1125 10:34:13.609068 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-frcz9" event={"ID":"54ad0590-7880-4467-b980-334b0ea3807c","Type":"ContainerStarted","Data":"9ebccf38453c0e7c2cdcdfb4a7e39733422d25da4d200b9f39d8210ce6b30d3d"} Nov 25 10:34:13 crc kubenswrapper[4813]: I1125 10:34:13.610718 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-wsrtz" event={"ID":"b734673f-c958-487f-8871-cf40f8fe8e0b","Type":"ContainerStarted","Data":"044d94c170d09365ff0e17fbcc8d266f3587bac24517a5f257e166ea34f18b71"} Nov 25 10:34:13 crc kubenswrapper[4813]: I1125 10:34:13.612591 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-dsd6j" event={"ID":"1ef5c9d6-3f23-49c2-87e0-5c6d76ae0aa6","Type":"ContainerStarted","Data":"6fdb3a0a1ea7b20d24d4a113774682449d355f8da58e0bcaec57c5b63708ccfe"} Nov 25 10:34:13 crc kubenswrapper[4813]: I1125 10:34:13.614370 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kht7r" event={"ID":"083d68a5-93e4-4dbd-9ba0-e4e7d30da8f7","Type":"ContainerStarted","Data":"93cca7da26c6cbd58ecf1f117b7113e9a721ab5901826806b45c9ff0968c22ef"} Nov 25 10:34:13 crc kubenswrapper[4813]: I1125 10:34:13.616392 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"1144bee85ba160b30e5483296504b7026e64e08f06ec81fe70d6941b1d08f1f1"} Nov 25 10:34:13 crc kubenswrapper[4813]: I1125 10:34:13.618522 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-tfjp7" event={"ID":"7a14e28a-ae0c-47e5-b762-cc2f4f191b83","Type":"ContainerStarted","Data":"d4e90a9febd865604e37ff143be5e86279ef12a1fdb056e45938aac2b35182ca"} Nov 25 10:34:13 crc kubenswrapper[4813]: I1125 10:34:13.628405 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nklcx" event={"ID":"cb5d2b7c-15cb-4214-bd8c-0f9d144567f7","Type":"ContainerStarted","Data":"ff35f528f6cf7680e636b53d39b0bebc0843ee55a0fd471eb8e075feffaef472"} Nov 25 10:34:13 crc kubenswrapper[4813]: I1125 10:34:13.643672 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:13 crc kubenswrapper[4813]: E1125 10:34:13.643786 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:14.143766291 +0000 UTC m=+151.273476187 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:13 crc kubenswrapper[4813]: I1125 10:34:13.644103 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:13 crc kubenswrapper[4813]: E1125 10:34:13.644437 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:14.144426199 +0000 UTC m=+151.274136085 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:13 crc kubenswrapper[4813]: I1125 10:34:13.711523 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cg7wn" podStartSLOduration=129.711501917 podStartE2EDuration="2m9.711501917s" podCreationTimestamp="2025-11-25 10:32:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 10:34:13.708822694 +0000 UTC m=+150.838532600" watchObservedRunningTime="2025-11-25 10:34:13.711501917 +0000 UTC m=+150.841211803" Nov 25 10:34:13 crc kubenswrapper[4813]: I1125 10:34:13.745377 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:13 crc kubenswrapper[4813]: E1125 10:34:13.746120 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:14.246089945 +0000 UTC m=+151.375799831 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:13 crc kubenswrapper[4813]: I1125 10:34:13.749220 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:13 crc kubenswrapper[4813]: E1125 10:34:13.750098 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:14.250083954 +0000 UTC m=+151.379793830 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:13 crc kubenswrapper[4813]: I1125 10:34:13.852305 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:13 crc kubenswrapper[4813]: E1125 10:34:13.853018 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:14.352997834 +0000 UTC m=+151.482707720 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:13 crc kubenswrapper[4813]: I1125 10:34:13.954640 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:13 crc kubenswrapper[4813]: E1125 10:34:13.954975 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:14.454964948 +0000 UTC m=+151.584674834 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:13 crc kubenswrapper[4813]: I1125 10:34:13.971099 4813 patch_prober.go:28] interesting pod/router-default-5444994796-hvj2g container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Nov 25 10:34:13 crc kubenswrapper[4813]: I1125 10:34:13.971163 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hvj2g" podUID="b5580f94-06e3-4a91-b3e8-b1d7962438dd" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Nov 25 10:34:14 crc kubenswrapper[4813]: I1125 10:34:14.055547 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:14 crc kubenswrapper[4813]: E1125 10:34:14.055707 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:14.555688597 +0000 UTC m=+151.685398483 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:14 crc kubenswrapper[4813]: I1125 10:34:14.055850 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:14 crc kubenswrapper[4813]: E1125 10:34:14.056153 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:14.55614494 +0000 UTC m=+151.685854826 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:14 crc kubenswrapper[4813]: I1125 10:34:14.157274 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:14 crc kubenswrapper[4813]: E1125 10:34:14.157960 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:14.657943319 +0000 UTC m=+151.787653205 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:14 crc kubenswrapper[4813]: I1125 10:34:14.259604 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:14 crc kubenswrapper[4813]: E1125 10:34:14.260026 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:14.760006055 +0000 UTC m=+151.889715941 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:14 crc kubenswrapper[4813]: I1125 10:34:14.360196 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:14 crc kubenswrapper[4813]: E1125 10:34:14.360393 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:14.860350944 +0000 UTC m=+151.990060840 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:14 crc kubenswrapper[4813]: I1125 10:34:14.360482 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:14 crc kubenswrapper[4813]: E1125 10:34:14.360884 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:14.860873728 +0000 UTC m=+151.990583624 (durationBeforeRetry 500ms). 
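[editor's note] The "No retries permitted until ... (durationBeforeRetry 500ms)" lines above show the volume manager stamping each failed operation with an earliest-retry time before giving up on this attempt. A minimal sketch of that bookkeeping, assuming hypothetical names and a fixed 500 ms delay as logged:

package main

import (
	"fmt"
	"time"
)

type pendingOp struct {
	lastError  error
	notBefore  time.Time     // earliest time the operation may run again
	retryDelay time.Duration // the kubelet derives this from its backoff policy; fixed here for illustration
}

func (op *pendingOp) markFailed(err error, now time.Time) {
	op.lastError = err
	if op.retryDelay == 0 {
		op.retryDelay = 500 * time.Millisecond // matches durationBeforeRetry 500ms above
	}
	op.notBefore = now.Add(op.retryDelay)
}

func (op *pendingOp) mayRetry(now time.Time) bool { return !now.Before(op.notBefore) }

func main() {
	now := time.Date(2025, 11, 25, 10, 34, 13, 543_000_000, time.UTC)
	op := &pendingOp{}
	op.markFailed(fmt.Errorf("driver not registered"), now)
	fmt.Println("no retries permitted until", op.notBefore) // ~10:34:14.043 UTC, as logged
	fmt.Println("retry allowed now?", op.mayRetry(now))
}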
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:14 crc kubenswrapper[4813]: I1125 10:34:14.462104 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:14 crc kubenswrapper[4813]: E1125 10:34:14.462251 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:14.962221745 +0000 UTC m=+152.091931641 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:14 crc kubenswrapper[4813]: I1125 10:34:14.462655 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:14 crc kubenswrapper[4813]: E1125 10:34:14.462998 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:14.962984686 +0000 UTC m=+152.092694572 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:14 crc kubenswrapper[4813]: I1125 10:34:14.549821 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 25 10:34:14 crc kubenswrapper[4813]: I1125 10:34:14.550403 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 10:34:14 crc kubenswrapper[4813]: I1125 10:34:14.554055 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Nov 25 10:34:14 crc kubenswrapper[4813]: I1125 10:34:14.554399 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Nov 25 10:34:14 crc kubenswrapper[4813]: I1125 10:34:14.562010 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 25 10:34:14 crc kubenswrapper[4813]: I1125 10:34:14.563459 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:14 crc kubenswrapper[4813]: E1125 10:34:14.563634 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:15.063599033 +0000 UTC m=+152.193308929 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:14 crc kubenswrapper[4813]: I1125 10:34:14.563795 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:14 crc kubenswrapper[4813]: E1125 10:34:14.564110 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:15.064102467 +0000 UTC m=+152.193812353 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:14 crc kubenswrapper[4813]: I1125 10:34:14.628269 4813 generic.go:334] "Generic (PLEG): container finished" podID="670af6e7-f49a-40c1-9f2d-c3df905e9e44" containerID="8475003880e18a6f2b6992374c4eda484325b7b3645e9124c895a18668110917" exitCode=0 Nov 25 10:34:14 crc kubenswrapper[4813]: I1125 10:34:14.628340 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401110-g625d" event={"ID":"670af6e7-f49a-40c1-9f2d-c3df905e9e44","Type":"ContainerDied","Data":"8475003880e18a6f2b6992374c4eda484325b7b3645e9124c895a18668110917"} Nov 25 10:34:14 crc kubenswrapper[4813]: I1125 10:34:14.630598 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xlrc4" event={"ID":"b84b3a16-f833-40dc-8356-58bbc7aa3667","Type":"ContainerStarted","Data":"db6d46f3be65c7039cd52dd7594147e43f176e357ae1052f8e5353f9d60894a1"} Nov 25 10:34:14 crc kubenswrapper[4813]: I1125 10:34:14.632353 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"c68cd41c7613f796c827017db9371f8cfd9dd54aa98ad3ed51e40533c8e7dcab"} Nov 25 10:34:14 crc kubenswrapper[4813]: I1125 10:34:14.634383 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-lsg8f" event={"ID":"666f7543-9986-44d6-a7e4-dec723ea6a19","Type":"ContainerStarted","Data":"623f25fdf8e19b699dba5cbe76d5263132bd645b7b7aa9d49a615242b679799e"} Nov 25 10:34:14 crc kubenswrapper[4813]: I1125 10:34:14.634954 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-lsg8f" Nov 25 10:34:14 crc kubenswrapper[4813]: I1125 10:34:14.636569 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-q2vkk" event={"ID":"61f4c501-c97d-4a5b-9105-1918dec567a8","Type":"ContainerStarted","Data":"6b6046974117e56d0154148e429add6aee97a4a97dc81c68d851aff5f671dbf6"} Nov 25 10:34:14 crc kubenswrapper[4813]: I1125 10:34:14.639326 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-5ngzq" event={"ID":"7a70dbef-bca6-47b6-8814-424cc0cbf441","Type":"ContainerStarted","Data":"992db843651829a4345e9103ba3baa97d3509c89bfad3a1ea58894c7d95f562a"} Nov 25 10:34:14 crc kubenswrapper[4813]: I1125 10:34:14.641368 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"3167a1f2c0478e62444517b92bb2a5e4b66cfa2d75f59973b5a8517528a72692"} Nov 25 10:34:14 crc kubenswrapper[4813]: I1125 10:34:14.641696 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 10:34:14 crc kubenswrapper[4813]: I1125 10:34:14.643246 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hf42t" event={"ID":"4c4a4714-5f62-440f-ad51-bc55a08ad978","Type":"ContainerStarted","Data":"ca8f7b1cbe71155b594bd7f3bbff8d74cff118556de00d4b5a5d5118e1c55de5"} Nov 25 10:34:14 crc kubenswrapper[4813]: I1125 10:34:14.644034 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nklcx" Nov 25 10:34:14 crc kubenswrapper[4813]: I1125 10:34:14.647428 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cg7wn" Nov 25 10:34:14 crc kubenswrapper[4813]: I1125 10:34:14.647458 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cg7wn" Nov 25 10:34:14 crc kubenswrapper[4813]: I1125 10:34:14.649505 4813 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-cg7wn container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.5:8443/livez\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Nov 25 10:34:14 crc kubenswrapper[4813]: I1125 10:34:14.649573 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cg7wn" podUID="b543d0c3-b775-4c87-bbd0-016e86361945" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.5:8443/livez\": dial tcp 10.217.0.5:8443: connect: connection refused" Nov 25 10:34:14 crc kubenswrapper[4813]: I1125 10:34:14.665205 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:14 crc kubenswrapper[4813]: E1125 10:34:14.665357 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:15.16532791 +0000 UTC m=+152.295037816 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:14 crc kubenswrapper[4813]: I1125 10:34:14.665598 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:14 crc kubenswrapper[4813]: I1125 10:34:14.665779 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/20964ab5-e31f-4fa0-8f95-807eca78e10e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"20964ab5-e31f-4fa0-8f95-807eca78e10e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 10:34:14 crc kubenswrapper[4813]: I1125 10:34:14.665853 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/20964ab5-e31f-4fa0-8f95-807eca78e10e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"20964ab5-e31f-4fa0-8f95-807eca78e10e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 10:34:14 crc kubenswrapper[4813]: E1125 10:34:14.666058 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:15.166043369 +0000 UTC m=+152.295753255 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:14 crc kubenswrapper[4813]: I1125 10:34:14.669291 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xlrc4" podStartSLOduration=130.669255098 podStartE2EDuration="2m10.669255098s" podCreationTimestamp="2025-11-25 10:32:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 10:34:14.66532315 +0000 UTC m=+151.795033056" watchObservedRunningTime="2025-11-25 10:34:14.669255098 +0000 UTC m=+151.798964984" Nov 25 10:34:14 crc kubenswrapper[4813]: I1125 10:34:14.690605 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-lsg8f" podStartSLOduration=12.690567771 podStartE2EDuration="12.690567771s" podCreationTimestamp="2025-11-25 10:34:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 10:34:14.688087733 +0000 UTC m=+151.817797649" watchObservedRunningTime="2025-11-25 10:34:14.690567771 +0000 UTC m=+151.820277677" Nov 25 10:34:14 crc kubenswrapper[4813]: I1125 10:34:14.762952 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-5ngzq" podStartSLOduration=131.762935244 podStartE2EDuration="2m11.762935244s" podCreationTimestamp="2025-11-25 10:32:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 10:34:14.760835437 +0000 UTC m=+151.890545343" watchObservedRunningTime="2025-11-25 10:34:14.762935244 +0000 UTC m=+151.892645120" Nov 25 10:34:14 crc kubenswrapper[4813]: I1125 10:34:14.767020 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:14 crc kubenswrapper[4813]: E1125 10:34:14.767181 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:15.26715386 +0000 UTC m=+152.396863756 (durationBeforeRetry 500ms). 
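[editor's note] The "Observed pod startup duration" entries above compute podStartE2EDuration as the watch-observed running time minus podCreationTimestamp. A quick check of that arithmetic for the machine-config-controller entry (values taken from the log above):

package main

import (
	"fmt"
	"time"
)

func main() {
	created, _ := time.Parse(time.RFC3339Nano, "2025-11-25T10:32:04Z")
	observed, _ := time.Parse(time.RFC3339Nano, "2025-11-25T10:34:14.669255098Z")
	// Prints 2m10.669255098s, matching podStartE2EDuration="2m10.669255098s" logged above.
	fmt.Println(observed.Sub(created))
}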
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:14 crc kubenswrapper[4813]: I1125 10:34:14.767324 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/20964ab5-e31f-4fa0-8f95-807eca78e10e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"20964ab5-e31f-4fa0-8f95-807eca78e10e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 10:34:14 crc kubenswrapper[4813]: I1125 10:34:14.767438 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/20964ab5-e31f-4fa0-8f95-807eca78e10e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"20964ab5-e31f-4fa0-8f95-807eca78e10e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 10:34:14 crc kubenswrapper[4813]: I1125 10:34:14.767954 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:14 crc kubenswrapper[4813]: I1125 10:34:14.770036 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/20964ab5-e31f-4fa0-8f95-807eca78e10e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"20964ab5-e31f-4fa0-8f95-807eca78e10e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 10:34:14 crc kubenswrapper[4813]: E1125 10:34:14.772260 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:15.272242549 +0000 UTC m=+152.401952435 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:14 crc kubenswrapper[4813]: I1125 10:34:14.800748 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/20964ab5-e31f-4fa0-8f95-807eca78e10e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"20964ab5-e31f-4fa0-8f95-807eca78e10e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 10:34:14 crc kubenswrapper[4813]: I1125 10:34:14.822381 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-wfr92" Nov 25 10:34:14 crc kubenswrapper[4813]: I1125 10:34:14.824102 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-48zrm" podStartSLOduration=130.82409018 podStartE2EDuration="2m10.82409018s" podCreationTimestamp="2025-11-25 10:32:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 10:34:14.821650993 +0000 UTC m=+151.951360879" watchObservedRunningTime="2025-11-25 10:34:14.82409018 +0000 UTC m=+151.953800066" Nov 25 10:34:14 crc kubenswrapper[4813]: I1125 10:34:14.831010 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-n6d5q" Nov 25 10:34:14 crc kubenswrapper[4813]: I1125 10:34:14.839913 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-wfr92" Nov 25 10:34:14 crc kubenswrapper[4813]: I1125 10:34:14.850686 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-vd4gc" Nov 25 10:34:14 crc kubenswrapper[4813]: I1125 10:34:14.868981 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:14 crc kubenswrapper[4813]: E1125 10:34:14.869179 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:15.369161925 +0000 UTC m=+152.498871811 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:14 crc kubenswrapper[4813]: I1125 10:34:14.869604 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:14 crc kubenswrapper[4813]: E1125 10:34:14.869935 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:15.369925585 +0000 UTC m=+152.499635471 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:14 crc kubenswrapper[4813]: I1125 10:34:14.873495 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-wsrtz" podStartSLOduration=130.873471833 podStartE2EDuration="2m10.873471833s" podCreationTimestamp="2025-11-25 10:32:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 10:34:14.871836948 +0000 UTC m=+152.001546834" watchObservedRunningTime="2025-11-25 10:34:14.873471833 +0000 UTC m=+152.003181719" Nov 25 10:34:14 crc kubenswrapper[4813]: I1125 10:34:14.895637 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-dsd6j" podStartSLOduration=130.895619619 podStartE2EDuration="2m10.895619619s" podCreationTimestamp="2025-11-25 10:32:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 10:34:14.891144887 +0000 UTC m=+152.020854773" watchObservedRunningTime="2025-11-25 10:34:14.895619619 +0000 UTC m=+152.025329515" Nov 25 10:34:14 crc kubenswrapper[4813]: I1125 10:34:14.915792 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kht7r" podStartSLOduration=130.915769701 podStartE2EDuration="2m10.915769701s" podCreationTimestamp="2025-11-25 10:32:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 10:34:14.909945792 +0000 UTC m=+152.039655678" watchObservedRunningTime="2025-11-25 10:34:14.915769701 +0000 UTC m=+152.045479587" Nov 25 10:34:14 crc kubenswrapper[4813]: I1125 
10:34:14.923551 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 10:34:14 crc kubenswrapper[4813]: I1125 10:34:14.934191 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-frcz9" podStartSLOduration=130.934176016 podStartE2EDuration="2m10.934176016s" podCreationTimestamp="2025-11-25 10:32:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 10:34:14.933301812 +0000 UTC m=+152.063011718" watchObservedRunningTime="2025-11-25 10:34:14.934176016 +0000 UTC m=+152.063885902" Nov 25 10:34:14 crc kubenswrapper[4813]: I1125 10:34:14.953238 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hf42t" podStartSLOduration=130.953219848 podStartE2EDuration="2m10.953219848s" podCreationTimestamp="2025-11-25 10:32:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 10:34:14.95039064 +0000 UTC m=+152.080100536" watchObservedRunningTime="2025-11-25 10:34:14.953219848 +0000 UTC m=+152.082929734" Nov 25 10:34:14 crc kubenswrapper[4813]: I1125 10:34:14.966646 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-5ngzq" Nov 25 10:34:14 crc kubenswrapper[4813]: I1125 10:34:14.966711 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-5ngzq" Nov 25 10:34:14 crc kubenswrapper[4813]: I1125 10:34:14.969058 4813 patch_prober.go:28] interesting pod/apiserver-76f77b778f-5ngzq container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.6:8443/livez\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Nov 25 10:34:14 crc kubenswrapper[4813]: I1125 10:34:14.969209 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-5ngzq" podUID="7a70dbef-bca6-47b6-8814-424cc0cbf441" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.6:8443/livez\": dial tcp 10.217.0.6:8443: connect: connection refused" Nov 25 10:34:14 crc kubenswrapper[4813]: I1125 10:34:14.970635 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:14 crc kubenswrapper[4813]: E1125 10:34:14.972148 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:15.472130446 +0000 UTC m=+152.601840362 (durationBeforeRetry 500ms). 
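[editor's note] The Startup probe failures above record the kubelet dialing the container's health endpoint and treating a refused connection or an error status as failure while the server is still coming up. A minimal sketch of one HTTP probe attempt, using the router's /healthz/ready URL from the log; the helper name and timeout are illustrative, not the kubelet's prober implementation:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// probeOnce performs a single HTTP GET and reports failure the way the log
// entries above do: a transport error ("connect: connection refused") or a
// status outside the 2xx/3xx range ("HTTP probe failed with statuscode: 500").
func probeOnce(url string) error {
	client := &http.Client{Timeout: 1 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("HTTP probe failed with statuscode: %d", resp.StatusCode)
	}
	return nil
}

func main() {
	if err := probeOnce("http://localhost:1936/healthz/ready"); err != nil {
		fmt.Println("Probe failed:", err) // the kubelet would log a Startup probe failure
	}
}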
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:14 crc kubenswrapper[4813]: I1125 10:34:14.978189 4813 patch_prober.go:28] interesting pod/router-default-5444994796-hvj2g container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 10:34:14 crc kubenswrapper[4813]: [-]has-synced failed: reason withheld Nov 25 10:34:14 crc kubenswrapper[4813]: [+]process-running ok Nov 25 10:34:14 crc kubenswrapper[4813]: healthz check failed Nov 25 10:34:14 crc kubenswrapper[4813]: I1125 10:34:14.978245 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hvj2g" podUID="b5580f94-06e3-4a91-b3e8-b1d7962438dd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 10:34:14 crc kubenswrapper[4813]: I1125 10:34:14.992194 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-tfjp7" podStartSLOduration=130.992171695 podStartE2EDuration="2m10.992171695s" podCreationTimestamp="2025-11-25 10:32:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 10:34:14.990587541 +0000 UTC m=+152.120297457" watchObservedRunningTime="2025-11-25 10:34:14.992171695 +0000 UTC m=+152.121881591" Nov 25 10:34:15 crc kubenswrapper[4813]: I1125 10:34:15.008652 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ckjsl" Nov 25 10:34:15 crc kubenswrapper[4813]: I1125 10:34:15.042198 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7b2wt" podStartSLOduration=131.042183445 podStartE2EDuration="2m11.042183445s" podCreationTimestamp="2025-11-25 10:32:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 10:34:15.040635792 +0000 UTC m=+152.170345688" watchObservedRunningTime="2025-11-25 10:34:15.042183445 +0000 UTC m=+152.171893331" Nov 25 10:34:15 crc kubenswrapper[4813]: I1125 10:34:15.063442 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nklcx" podStartSLOduration=131.063415787 podStartE2EDuration="2m11.063415787s" podCreationTimestamp="2025-11-25 10:32:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 10:34:15.057822733 +0000 UTC m=+152.187532629" watchObservedRunningTime="2025-11-25 10:34:15.063415787 +0000 UTC m=+152.193125663" Nov 25 10:34:15 crc kubenswrapper[4813]: I1125 10:34:15.073125 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:15 crc kubenswrapper[4813]: E1125 10:34:15.073543 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:15.573527344 +0000 UTC m=+152.703237230 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:15 crc kubenswrapper[4813]: I1125 10:34:15.177341 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:15 crc kubenswrapper[4813]: E1125 10:34:15.177952 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:15.677913354 +0000 UTC m=+152.807623240 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:15 crc kubenswrapper[4813]: I1125 10:34:15.279609 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:15 crc kubenswrapper[4813]: E1125 10:34:15.280067 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:15.780046111 +0000 UTC m=+152.909756067 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:15 crc kubenswrapper[4813]: I1125 10:34:15.299376 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 25 10:34:15 crc kubenswrapper[4813]: I1125 10:34:15.380797 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:15 crc kubenswrapper[4813]: E1125 10:34:15.381023 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:15.880997597 +0000 UTC m=+153.010707483 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:15 crc kubenswrapper[4813]: I1125 10:34:15.381123 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:15 crc kubenswrapper[4813]: E1125 10:34:15.381431 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:15.881420888 +0000 UTC m=+153.011130854 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:15 crc kubenswrapper[4813]: I1125 10:34:15.473012 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-482dq" Nov 25 10:34:15 crc kubenswrapper[4813]: I1125 10:34:15.476339 4813 patch_prober.go:28] interesting pod/downloads-7954f5f757-482dq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 25 10:34:15 crc kubenswrapper[4813]: I1125 10:34:15.476387 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-482dq" podUID="a4fc4e54-61da-43ab-934e-5f7ed6178ab6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 25 10:34:15 crc kubenswrapper[4813]: I1125 10:34:15.476384 4813 patch_prober.go:28] interesting pod/downloads-7954f5f757-482dq container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 25 10:34:15 crc kubenswrapper[4813]: I1125 10:34:15.476635 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-482dq" podUID="a4fc4e54-61da-43ab-934e-5f7ed6178ab6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 25 10:34:15 crc kubenswrapper[4813]: I1125 10:34:15.476841 4813 patch_prober.go:28] interesting pod/downloads-7954f5f757-482dq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 25 10:34:15 crc kubenswrapper[4813]: I1125 10:34:15.476956 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-482dq" podUID="a4fc4e54-61da-43ab-934e-5f7ed6178ab6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 25 10:34:15 crc kubenswrapper[4813]: I1125 10:34:15.482499 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:15 crc kubenswrapper[4813]: E1125 10:34:15.482717 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:15.982672892 +0000 UTC m=+153.112382788 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:15 crc kubenswrapper[4813]: I1125 10:34:15.482805 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:15 crc kubenswrapper[4813]: E1125 10:34:15.483243 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:15.983231438 +0000 UTC m=+153.112941314 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:15 crc kubenswrapper[4813]: I1125 10:34:15.585252 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:15 crc kubenswrapper[4813]: E1125 10:34:15.585459 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:16.085428098 +0000 UTC m=+153.215137994 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:15 crc kubenswrapper[4813]: I1125 10:34:15.585759 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:15 crc kubenswrapper[4813]: E1125 10:34:15.586111 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:16.086101376 +0000 UTC m=+153.215811332 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:15 crc kubenswrapper[4813]: I1125 10:34:15.649110 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"20964ab5-e31f-4fa0-8f95-807eca78e10e","Type":"ContainerStarted","Data":"94a3c20579235c97253e30e7dd4f95d50aeed859a0fb82b9b7b87f803d95ceee"} Nov 25 10:34:15 crc kubenswrapper[4813]: I1125 10:34:15.687116 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:15 crc kubenswrapper[4813]: E1125 10:34:15.687620 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:16.187601127 +0000 UTC m=+153.317311013 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:15 crc kubenswrapper[4813]: I1125 10:34:15.687699 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:15 crc kubenswrapper[4813]: E1125 10:34:15.688062 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:16.188042639 +0000 UTC m=+153.317752575 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:15 crc kubenswrapper[4813]: I1125 10:34:15.788465 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:15 crc kubenswrapper[4813]: E1125 10:34:15.788660 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:16.288619225 +0000 UTC m=+153.418329111 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:15 crc kubenswrapper[4813]: I1125 10:34:15.790165 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:15 crc kubenswrapper[4813]: E1125 10:34:15.791415 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:16.291401061 +0000 UTC m=+153.421111037 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:15 crc kubenswrapper[4813]: I1125 10:34:15.890911 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:15 crc kubenswrapper[4813]: E1125 10:34:15.891153 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:16.391121773 +0000 UTC m=+153.520831659 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:15 crc kubenswrapper[4813]: I1125 10:34:15.935352 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401110-g625d" Nov 25 10:34:15 crc kubenswrapper[4813]: I1125 10:34:15.968322 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-hvj2g" Nov 25 10:34:15 crc kubenswrapper[4813]: I1125 10:34:15.973020 4813 patch_prober.go:28] interesting pod/router-default-5444994796-hvj2g container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 10:34:15 crc kubenswrapper[4813]: [-]has-synced failed: reason withheld Nov 25 10:34:15 crc kubenswrapper[4813]: [+]process-running ok Nov 25 10:34:15 crc kubenswrapper[4813]: healthz check failed Nov 25 10:34:15 crc kubenswrapper[4813]: I1125 10:34:15.973095 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hvj2g" podUID="b5580f94-06e3-4a91-b3e8-b1d7962438dd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 10:34:15 crc kubenswrapper[4813]: I1125 10:34:15.992053 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:15 crc kubenswrapper[4813]: E1125 10:34:15.992376 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:16.492361997 +0000 UTC m=+153.622071893 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:15 crc kubenswrapper[4813]: I1125 10:34:15.994359 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-shxrh" Nov 25 10:34:15 crc kubenswrapper[4813]: I1125 10:34:15.996868 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mpgj4" Nov 25 10:34:16 crc kubenswrapper[4813]: I1125 10:34:16.008788 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-7s8tp" Nov 25 10:34:16 crc kubenswrapper[4813]: I1125 10:34:16.038808 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-7s8tp" Nov 25 10:34:16 crc kubenswrapper[4813]: I1125 10:34:16.070111 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8xspn" Nov 25 10:34:16 crc kubenswrapper[4813]: I1125 10:34:16.093394 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/670af6e7-f49a-40c1-9f2d-c3df905e9e44-config-volume\") pod \"670af6e7-f49a-40c1-9f2d-c3df905e9e44\" (UID: \"670af6e7-f49a-40c1-9f2d-c3df905e9e44\") " Nov 25 10:34:16 crc kubenswrapper[4813]: I1125 10:34:16.093977 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/670af6e7-f49a-40c1-9f2d-c3df905e9e44-config-volume" (OuterVolumeSpecName: "config-volume") pod "670af6e7-f49a-40c1-9f2d-c3df905e9e44" (UID: "670af6e7-f49a-40c1-9f2d-c3df905e9e44"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:34:16 crc kubenswrapper[4813]: I1125 10:34:16.094134 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:16 crc kubenswrapper[4813]: E1125 10:34:16.094258 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:16.594240518 +0000 UTC m=+153.723950404 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:16 crc kubenswrapper[4813]: I1125 10:34:16.094441 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/670af6e7-f49a-40c1-9f2d-c3df905e9e44-secret-volume\") pod \"670af6e7-f49a-40c1-9f2d-c3df905e9e44\" (UID: \"670af6e7-f49a-40c1-9f2d-c3df905e9e44\") " Nov 25 10:34:16 crc kubenswrapper[4813]: I1125 10:34:16.094632 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fswr5\" (UniqueName: \"kubernetes.io/projected/670af6e7-f49a-40c1-9f2d-c3df905e9e44-kube-api-access-fswr5\") pod \"670af6e7-f49a-40c1-9f2d-c3df905e9e44\" (UID: \"670af6e7-f49a-40c1-9f2d-c3df905e9e44\") " Nov 25 10:34:16 crc kubenswrapper[4813]: E1125 10:34:16.095485 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:16.595469072 +0000 UTC m=+153.725178948 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:16 crc kubenswrapper[4813]: I1125 10:34:16.095035 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:16 crc kubenswrapper[4813]: I1125 10:34:16.096543 4813 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/670af6e7-f49a-40c1-9f2d-c3df905e9e44-config-volume\") on node \"crc\" DevicePath \"\"" Nov 25 10:34:16 crc kubenswrapper[4813]: I1125 10:34:16.108273 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/670af6e7-f49a-40c1-9f2d-c3df905e9e44-kube-api-access-fswr5" (OuterVolumeSpecName: "kube-api-access-fswr5") pod "670af6e7-f49a-40c1-9f2d-c3df905e9e44" (UID: "670af6e7-f49a-40c1-9f2d-c3df905e9e44"). InnerVolumeSpecName "kube-api-access-fswr5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:34:16 crc kubenswrapper[4813]: I1125 10:34:16.113160 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/670af6e7-f49a-40c1-9f2d-c3df905e9e44-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "670af6e7-f49a-40c1-9f2d-c3df905e9e44" (UID: "670af6e7-f49a-40c1-9f2d-c3df905e9e44"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:34:16 crc kubenswrapper[4813]: I1125 10:34:16.199142 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:16 crc kubenswrapper[4813]: E1125 10:34:16.199557 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:16.699537563 +0000 UTC m=+153.829247449 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:16 crc kubenswrapper[4813]: I1125 10:34:16.199579 4813 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/670af6e7-f49a-40c1-9f2d-c3df905e9e44-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 25 10:34:16 crc kubenswrapper[4813]: I1125 10:34:16.199591 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fswr5\" (UniqueName: \"kubernetes.io/projected/670af6e7-f49a-40c1-9f2d-c3df905e9e44-kube-api-access-fswr5\") on node \"crc\" DevicePath \"\"" Nov 25 10:34:16 crc kubenswrapper[4813]: I1125 10:34:16.300310 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:16 crc kubenswrapper[4813]: E1125 10:34:16.300745 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:16.800727515 +0000 UTC m=+153.930437401 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:16 crc kubenswrapper[4813]: I1125 10:34:16.389989 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-rpfp2" Nov 25 10:34:16 crc kubenswrapper[4813]: I1125 10:34:16.390052 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-rpfp2" Nov 25 10:34:16 crc kubenswrapper[4813]: I1125 10:34:16.391540 4813 patch_prober.go:28] interesting pod/console-f9d7485db-rpfp2 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.30:8443/health\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body= Nov 25 10:34:16 crc kubenswrapper[4813]: I1125 10:34:16.391598 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-rpfp2" podUID="17571cbf-de36-4b34-af0b-3db7493adaf4" containerName="console" probeResult="failure" output="Get \"https://10.217.0.30:8443/health\": dial tcp 10.217.0.30:8443: connect: connection refused" Nov 25 10:34:16 crc kubenswrapper[4813]: I1125 10:34:16.401867 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:16 crc kubenswrapper[4813]: E1125 10:34:16.402145 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:16.902100983 +0000 UTC m=+154.031810879 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:16 crc kubenswrapper[4813]: I1125 10:34:16.402273 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:16 crc kubenswrapper[4813]: E1125 10:34:16.402818 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-11-25 10:34:16.902795532 +0000 UTC m=+154.032505608 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:16 crc kubenswrapper[4813]: I1125 10:34:16.503728 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:16 crc kubenswrapper[4813]: E1125 10:34:16.504003 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:17.003962003 +0000 UTC m=+154.133671899 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:16 crc kubenswrapper[4813]: I1125 10:34:16.504223 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:16 crc kubenswrapper[4813]: E1125 10:34:16.504691 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:17.004667433 +0000 UTC m=+154.134377319 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:16 crc kubenswrapper[4813]: I1125 10:34:16.514029 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8xspn" Nov 25 10:34:16 crc kubenswrapper[4813]: I1125 10:34:16.606198 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:16 crc kubenswrapper[4813]: E1125 10:34:16.606343 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:17.106323858 +0000 UTC m=+154.236033744 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:16 crc kubenswrapper[4813]: I1125 10:34:16.606662 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:16 crc kubenswrapper[4813]: E1125 10:34:16.607118 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:17.107097349 +0000 UTC m=+154.236807245 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:16 crc kubenswrapper[4813]: I1125 10:34:16.656124 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401110-g625d" event={"ID":"670af6e7-f49a-40c1-9f2d-c3df905e9e44","Type":"ContainerDied","Data":"7825d90c139d9e90969bc63d8506802242d57d01f418c5d5fac646c8e071e0fd"} Nov 25 10:34:16 crc kubenswrapper[4813]: I1125 10:34:16.656196 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7825d90c139d9e90969bc63d8506802242d57d01f418c5d5fac646c8e071e0fd" Nov 25 10:34:16 crc kubenswrapper[4813]: I1125 10:34:16.656143 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401110-g625d" Nov 25 10:34:16 crc kubenswrapper[4813]: I1125 10:34:16.657948 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"20964ab5-e31f-4fa0-8f95-807eca78e10e","Type":"ContainerStarted","Data":"b293a1748cc789a9ba35ba8746a6d75bd2689518b586923d0b0f8b596392de12"} Nov 25 10:34:16 crc kubenswrapper[4813]: I1125 10:34:16.682030 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.682010302 podStartE2EDuration="2.682010302s" podCreationTimestamp="2025-11-25 10:34:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 10:34:16.679463452 +0000 UTC m=+153.809173348" watchObservedRunningTime="2025-11-25 10:34:16.682010302 +0000 UTC m=+153.811720188" Nov 25 10:34:16 crc kubenswrapper[4813]: I1125 10:34:16.707378 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:16 crc kubenswrapper[4813]: E1125 10:34:16.708820 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:17.208803056 +0000 UTC m=+154.338512942 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:16 crc kubenswrapper[4813]: E1125 10:34:16.809553 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:17.309539056 +0000 UTC m=+154.439248942 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:16 crc kubenswrapper[4813]: I1125 10:34:16.809750 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:16 crc kubenswrapper[4813]: I1125 10:34:16.910918 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:16 crc kubenswrapper[4813]: E1125 10:34:16.911106 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:17.411079237 +0000 UTC m=+154.540789123 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:16 crc kubenswrapper[4813]: I1125 10:34:16.911155 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:16 crc kubenswrapper[4813]: E1125 10:34:16.911434 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:17.411422427 +0000 UTC m=+154.541132313 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:16 crc kubenswrapper[4813]: I1125 10:34:16.972566 4813 patch_prober.go:28] interesting pod/router-default-5444994796-hvj2g container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 10:34:16 crc kubenswrapper[4813]: [-]has-synced failed: reason withheld Nov 25 10:34:16 crc kubenswrapper[4813]: [+]process-running ok Nov 25 10:34:16 crc kubenswrapper[4813]: healthz check failed Nov 25 10:34:16 crc kubenswrapper[4813]: I1125 10:34:16.972627 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hvj2g" podUID="b5580f94-06e3-4a91-b3e8-b1d7962438dd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 10:34:17 crc kubenswrapper[4813]: I1125 10:34:17.012865 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:17 crc kubenswrapper[4813]: E1125 10:34:17.013055 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:17.513024441 +0000 UTC m=+154.642734347 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:17 crc kubenswrapper[4813]: I1125 10:34:17.013271 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:17 crc kubenswrapper[4813]: E1125 10:34:17.013622 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:17.513610787 +0000 UTC m=+154.643320673 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:17 crc kubenswrapper[4813]: I1125 10:34:17.114423 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:17 crc kubenswrapper[4813]: E1125 10:34:17.114639 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:17.614592453 +0000 UTC m=+154.744302339 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:17 crc kubenswrapper[4813]: I1125 10:34:17.114715 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:17 crc kubenswrapper[4813]: E1125 10:34:17.115006 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:17.614991954 +0000 UTC m=+154.744701830 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:17 crc kubenswrapper[4813]: I1125 10:34:17.216393 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:17 crc kubenswrapper[4813]: E1125 10:34:17.216655 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:17.716609928 +0000 UTC m=+154.846319814 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:17 crc kubenswrapper[4813]: I1125 10:34:17.216883 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:17 crc kubenswrapper[4813]: E1125 10:34:17.217248 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:17.717234415 +0000 UTC m=+154.846944301 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:17 crc kubenswrapper[4813]: I1125 10:34:17.309309 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rhgxx"] Nov 25 10:34:17 crc kubenswrapper[4813]: E1125 10:34:17.309511 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="670af6e7-f49a-40c1-9f2d-c3df905e9e44" containerName="collect-profiles" Nov 25 10:34:17 crc kubenswrapper[4813]: I1125 10:34:17.309522 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="670af6e7-f49a-40c1-9f2d-c3df905e9e44" containerName="collect-profiles" Nov 25 10:34:17 crc kubenswrapper[4813]: I1125 10:34:17.309607 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="670af6e7-f49a-40c1-9f2d-c3df905e9e44" containerName="collect-profiles" Nov 25 10:34:17 crc kubenswrapper[4813]: I1125 10:34:17.310284 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rhgxx" Nov 25 10:34:17 crc kubenswrapper[4813]: I1125 10:34:17.314261 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 25 10:34:17 crc kubenswrapper[4813]: I1125 10:34:17.317795 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:17 crc kubenswrapper[4813]: E1125 10:34:17.317917 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:17.817898813 +0000 UTC m=+154.947608699 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:17 crc kubenswrapper[4813]: I1125 10:34:17.318127 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:17 crc kubenswrapper[4813]: E1125 10:34:17.318446 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:17.818439068 +0000 UTC m=+154.948148954 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:17 crc kubenswrapper[4813]: I1125 10:34:17.328695 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rhgxx"] Nov 25 10:34:17 crc kubenswrapper[4813]: I1125 10:34:17.418861 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:17 crc kubenswrapper[4813]: E1125 10:34:17.419063 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:17.919028884 +0000 UTC m=+155.048738770 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:17 crc kubenswrapper[4813]: I1125 10:34:17.419123 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:17 crc kubenswrapper[4813]: I1125 10:34:17.419264 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5deac33-30de-491e-94ff-53fe67de0eb8-utilities\") pod \"certified-operators-rhgxx\" (UID: \"a5deac33-30de-491e-94ff-53fe67de0eb8\") " pod="openshift-marketplace/certified-operators-rhgxx" Nov 25 10:34:17 crc kubenswrapper[4813]: I1125 10:34:17.419304 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2vzd\" (UniqueName: \"kubernetes.io/projected/a5deac33-30de-491e-94ff-53fe67de0eb8-kube-api-access-s2vzd\") pod \"certified-operators-rhgxx\" (UID: \"a5deac33-30de-491e-94ff-53fe67de0eb8\") " pod="openshift-marketplace/certified-operators-rhgxx" Nov 25 10:34:17 crc kubenswrapper[4813]: I1125 10:34:17.419420 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5deac33-30de-491e-94ff-53fe67de0eb8-catalog-content\") pod \"certified-operators-rhgxx\" (UID: \"a5deac33-30de-491e-94ff-53fe67de0eb8\") " 
pod="openshift-marketplace/certified-operators-rhgxx" Nov 25 10:34:17 crc kubenswrapper[4813]: E1125 10:34:17.419471 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:17.919455536 +0000 UTC m=+155.049165492 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:17 crc kubenswrapper[4813]: I1125 10:34:17.520605 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:17 crc kubenswrapper[4813]: E1125 10:34:17.520826 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:18.020788842 +0000 UTC m=+155.150498728 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:17 crc kubenswrapper[4813]: I1125 10:34:17.520963 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5deac33-30de-491e-94ff-53fe67de0eb8-utilities\") pod \"certified-operators-rhgxx\" (UID: \"a5deac33-30de-491e-94ff-53fe67de0eb8\") " pod="openshift-marketplace/certified-operators-rhgxx" Nov 25 10:34:17 crc kubenswrapper[4813]: I1125 10:34:17.520998 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2vzd\" (UniqueName: \"kubernetes.io/projected/a5deac33-30de-491e-94ff-53fe67de0eb8-kube-api-access-s2vzd\") pod \"certified-operators-rhgxx\" (UID: \"a5deac33-30de-491e-94ff-53fe67de0eb8\") " pod="openshift-marketplace/certified-operators-rhgxx" Nov 25 10:34:17 crc kubenswrapper[4813]: I1125 10:34:17.521055 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5deac33-30de-491e-94ff-53fe67de0eb8-catalog-content\") pod \"certified-operators-rhgxx\" (UID: \"a5deac33-30de-491e-94ff-53fe67de0eb8\") " pod="openshift-marketplace/certified-operators-rhgxx" Nov 25 10:34:17 crc kubenswrapper[4813]: I1125 10:34:17.521110 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:17 crc kubenswrapper[4813]: I1125 10:34:17.521410 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5deac33-30de-491e-94ff-53fe67de0eb8-utilities\") pod \"certified-operators-rhgxx\" (UID: \"a5deac33-30de-491e-94ff-53fe67de0eb8\") " pod="openshift-marketplace/certified-operators-rhgxx" Nov 25 10:34:17 crc kubenswrapper[4813]: E1125 10:34:17.521446 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:18.02143113 +0000 UTC m=+155.151141016 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:17 crc kubenswrapper[4813]: I1125 10:34:17.521542 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5deac33-30de-491e-94ff-53fe67de0eb8-catalog-content\") pod \"certified-operators-rhgxx\" (UID: \"a5deac33-30de-491e-94ff-53fe67de0eb8\") " pod="openshift-marketplace/certified-operators-rhgxx" Nov 25 10:34:17 crc kubenswrapper[4813]: I1125 10:34:17.525758 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mv2q6"] Nov 25 10:34:17 crc kubenswrapper[4813]: I1125 10:34:17.527671 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mv2q6" Nov 25 10:34:17 crc kubenswrapper[4813]: I1125 10:34:17.530476 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 25 10:34:17 crc kubenswrapper[4813]: I1125 10:34:17.545819 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-frcz9" Nov 25 10:34:17 crc kubenswrapper[4813]: I1125 10:34:17.556488 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mv2q6"] Nov 25 10:34:17 crc kubenswrapper[4813]: I1125 10:34:17.564562 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2vzd\" (UniqueName: \"kubernetes.io/projected/a5deac33-30de-491e-94ff-53fe67de0eb8-kube-api-access-s2vzd\") pod \"certified-operators-rhgxx\" (UID: \"a5deac33-30de-491e-94ff-53fe67de0eb8\") " pod="openshift-marketplace/certified-operators-rhgxx" Nov 25 10:34:17 crc kubenswrapper[4813]: I1125 10:34:17.628227 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:17 crc kubenswrapper[4813]: I1125 10:34:17.628547 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcnh5\" (UniqueName: \"kubernetes.io/projected/4c9a79a8-32f8-4018-b6e7-a76164389632-kube-api-access-lcnh5\") pod \"community-operators-mv2q6\" (UID: \"4c9a79a8-32f8-4018-b6e7-a76164389632\") " pod="openshift-marketplace/community-operators-mv2q6" Nov 25 10:34:17 crc kubenswrapper[4813]: I1125 10:34:17.628631 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c9a79a8-32f8-4018-b6e7-a76164389632-catalog-content\") pod \"community-operators-mv2q6\" (UID: \"4c9a79a8-32f8-4018-b6e7-a76164389632\") " pod="openshift-marketplace/community-operators-mv2q6" Nov 25 10:34:17 crc kubenswrapper[4813]: I1125 10:34:17.628666 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c9a79a8-32f8-4018-b6e7-a76164389632-utilities\") pod \"community-operators-mv2q6\" (UID: \"4c9a79a8-32f8-4018-b6e7-a76164389632\") " pod="openshift-marketplace/community-operators-mv2q6" Nov 25 10:34:17 crc kubenswrapper[4813]: E1125 10:34:17.628802 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:18.128783221 +0000 UTC m=+155.258493107 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:17 crc kubenswrapper[4813]: I1125 10:34:17.631993 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rhgxx" Nov 25 10:34:17 crc kubenswrapper[4813]: I1125 10:34:17.682187 4813 generic.go:334] "Generic (PLEG): container finished" podID="20964ab5-e31f-4fa0-8f95-807eca78e10e" containerID="b293a1748cc789a9ba35ba8746a6d75bd2689518b586923d0b0f8b596392de12" exitCode=0 Nov 25 10:34:17 crc kubenswrapper[4813]: I1125 10:34:17.682244 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"20964ab5-e31f-4fa0-8f95-807eca78e10e","Type":"ContainerDied","Data":"b293a1748cc789a9ba35ba8746a6d75bd2689518b586923d0b0f8b596392de12"} Nov 25 10:34:17 crc kubenswrapper[4813]: I1125 10:34:17.693181 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-trh25"] Nov 25 10:34:17 crc kubenswrapper[4813]: I1125 10:34:17.694121 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-trh25" Nov 25 10:34:17 crc kubenswrapper[4813]: I1125 10:34:17.713708 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-trh25"] Nov 25 10:34:17 crc kubenswrapper[4813]: I1125 10:34:17.730531 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9502b5b8-d91d-43fa-9498-4daeb00dd6ba-utilities\") pod \"certified-operators-trh25\" (UID: \"9502b5b8-d91d-43fa-9498-4daeb00dd6ba\") " pod="openshift-marketplace/certified-operators-trh25" Nov 25 10:34:17 crc kubenswrapper[4813]: I1125 10:34:17.730582 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9502b5b8-d91d-43fa-9498-4daeb00dd6ba-catalog-content\") pod \"certified-operators-trh25\" (UID: \"9502b5b8-d91d-43fa-9498-4daeb00dd6ba\") " pod="openshift-marketplace/certified-operators-trh25" Nov 25 10:34:17 crc kubenswrapper[4813]: I1125 10:34:17.730622 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcnh5\" (UniqueName: \"kubernetes.io/projected/4c9a79a8-32f8-4018-b6e7-a76164389632-kube-api-access-lcnh5\") pod \"community-operators-mv2q6\" (UID: \"4c9a79a8-32f8-4018-b6e7-a76164389632\") " pod="openshift-marketplace/community-operators-mv2q6" Nov 25 10:34:17 crc kubenswrapper[4813]: I1125 10:34:17.730646 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6xgg\" (UniqueName: \"kubernetes.io/projected/9502b5b8-d91d-43fa-9498-4daeb00dd6ba-kube-api-access-g6xgg\") pod \"certified-operators-trh25\" (UID: \"9502b5b8-d91d-43fa-9498-4daeb00dd6ba\") " pod="openshift-marketplace/certified-operators-trh25" Nov 25 10:34:17 crc kubenswrapper[4813]: I1125 10:34:17.730733 4813 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c9a79a8-32f8-4018-b6e7-a76164389632-catalog-content\") pod \"community-operators-mv2q6\" (UID: \"4c9a79a8-32f8-4018-b6e7-a76164389632\") " pod="openshift-marketplace/community-operators-mv2q6" Nov 25 10:34:17 crc kubenswrapper[4813]: I1125 10:34:17.730769 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c9a79a8-32f8-4018-b6e7-a76164389632-utilities\") pod \"community-operators-mv2q6\" (UID: \"4c9a79a8-32f8-4018-b6e7-a76164389632\") " pod="openshift-marketplace/community-operators-mv2q6" Nov 25 10:34:17 crc kubenswrapper[4813]: I1125 10:34:17.730804 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:17 crc kubenswrapper[4813]: E1125 10:34:17.731137 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:18.231108944 +0000 UTC m=+155.360818830 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:17 crc kubenswrapper[4813]: I1125 10:34:17.731881 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c9a79a8-32f8-4018-b6e7-a76164389632-catalog-content\") pod \"community-operators-mv2q6\" (UID: \"4c9a79a8-32f8-4018-b6e7-a76164389632\") " pod="openshift-marketplace/community-operators-mv2q6" Nov 25 10:34:17 crc kubenswrapper[4813]: I1125 10:34:17.738967 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c9a79a8-32f8-4018-b6e7-a76164389632-utilities\") pod \"community-operators-mv2q6\" (UID: \"4c9a79a8-32f8-4018-b6e7-a76164389632\") " pod="openshift-marketplace/community-operators-mv2q6" Nov 25 10:34:17 crc kubenswrapper[4813]: I1125 10:34:17.787995 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcnh5\" (UniqueName: \"kubernetes.io/projected/4c9a79a8-32f8-4018-b6e7-a76164389632-kube-api-access-lcnh5\") pod \"community-operators-mv2q6\" (UID: \"4c9a79a8-32f8-4018-b6e7-a76164389632\") " pod="openshift-marketplace/community-operators-mv2q6" Nov 25 10:34:17 crc kubenswrapper[4813]: I1125 10:34:17.836254 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:17 crc kubenswrapper[4813]: I1125 10:34:17.836530 4813 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9502b5b8-d91d-43fa-9498-4daeb00dd6ba-utilities\") pod \"certified-operators-trh25\" (UID: \"9502b5b8-d91d-43fa-9498-4daeb00dd6ba\") " pod="openshift-marketplace/certified-operators-trh25" Nov 25 10:34:17 crc kubenswrapper[4813]: I1125 10:34:17.836594 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9502b5b8-d91d-43fa-9498-4daeb00dd6ba-catalog-content\") pod \"certified-operators-trh25\" (UID: \"9502b5b8-d91d-43fa-9498-4daeb00dd6ba\") " pod="openshift-marketplace/certified-operators-trh25" Nov 25 10:34:17 crc kubenswrapper[4813]: I1125 10:34:17.836616 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6xgg\" (UniqueName: \"kubernetes.io/projected/9502b5b8-d91d-43fa-9498-4daeb00dd6ba-kube-api-access-g6xgg\") pod \"certified-operators-trh25\" (UID: \"9502b5b8-d91d-43fa-9498-4daeb00dd6ba\") " pod="openshift-marketplace/certified-operators-trh25" Nov 25 10:34:17 crc kubenswrapper[4813]: I1125 10:34:17.837190 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9502b5b8-d91d-43fa-9498-4daeb00dd6ba-utilities\") pod \"certified-operators-trh25\" (UID: \"9502b5b8-d91d-43fa-9498-4daeb00dd6ba\") " pod="openshift-marketplace/certified-operators-trh25" Nov 25 10:34:17 crc kubenswrapper[4813]: E1125 10:34:17.837708 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:18.337667034 +0000 UTC m=+155.467376990 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:17 crc kubenswrapper[4813]: I1125 10:34:17.841972 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9502b5b8-d91d-43fa-9498-4daeb00dd6ba-catalog-content\") pod \"certified-operators-trh25\" (UID: \"9502b5b8-d91d-43fa-9498-4daeb00dd6ba\") " pod="openshift-marketplace/certified-operators-trh25" Nov 25 10:34:17 crc kubenswrapper[4813]: I1125 10:34:17.845952 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mv2q6" Nov 25 10:34:17 crc kubenswrapper[4813]: I1125 10:34:17.863350 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6xgg\" (UniqueName: \"kubernetes.io/projected/9502b5b8-d91d-43fa-9498-4daeb00dd6ba-kube-api-access-g6xgg\") pod \"certified-operators-trh25\" (UID: \"9502b5b8-d91d-43fa-9498-4daeb00dd6ba\") " pod="openshift-marketplace/certified-operators-trh25" Nov 25 10:34:17 crc kubenswrapper[4813]: I1125 10:34:17.906128 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-km2jk"] Nov 25 10:34:17 crc kubenswrapper[4813]: I1125 10:34:17.907209 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-km2jk" Nov 25 10:34:17 crc kubenswrapper[4813]: I1125 10:34:17.929490 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-km2jk"] Nov 25 10:34:17 crc kubenswrapper[4813]: I1125 10:34:17.941645 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e31e8556-76b9-48db-a630-4a990e4a432a-utilities\") pod \"community-operators-km2jk\" (UID: \"e31e8556-76b9-48db-a630-4a990e4a432a\") " pod="openshift-marketplace/community-operators-km2jk" Nov 25 10:34:17 crc kubenswrapper[4813]: I1125 10:34:17.941720 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:17 crc kubenswrapper[4813]: E1125 10:34:17.942058 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:18.442039124 +0000 UTC m=+155.571749160 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:17 crc kubenswrapper[4813]: I1125 10:34:17.942095 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6clhd\" (UniqueName: \"kubernetes.io/projected/e31e8556-76b9-48db-a630-4a990e4a432a-kube-api-access-6clhd\") pod \"community-operators-km2jk\" (UID: \"e31e8556-76b9-48db-a630-4a990e4a432a\") " pod="openshift-marketplace/community-operators-km2jk" Nov 25 10:34:17 crc kubenswrapper[4813]: I1125 10:34:17.942134 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e31e8556-76b9-48db-a630-4a990e4a432a-catalog-content\") pod \"community-operators-km2jk\" (UID: \"e31e8556-76b9-48db-a630-4a990e4a432a\") " pod="openshift-marketplace/community-operators-km2jk" Nov 25 10:34:17 crc kubenswrapper[4813]: I1125 10:34:17.986875 4813 patch_prober.go:28] interesting pod/router-default-5444994796-hvj2g container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 10:34:17 crc kubenswrapper[4813]: [-]has-synced failed: reason withheld Nov 25 10:34:17 crc kubenswrapper[4813]: [+]process-running ok Nov 25 10:34:17 crc kubenswrapper[4813]: healthz check failed Nov 25 10:34:17 crc kubenswrapper[4813]: I1125 10:34:17.987201 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hvj2g" podUID="b5580f94-06e3-4a91-b3e8-b1d7962438dd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 10:34:18 crc kubenswrapper[4813]: I1125 10:34:18.041043 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-trh25" Nov 25 10:34:18 crc kubenswrapper[4813]: I1125 10:34:18.043635 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:18 crc kubenswrapper[4813]: I1125 10:34:18.044001 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e31e8556-76b9-48db-a630-4a990e4a432a-utilities\") pod \"community-operators-km2jk\" (UID: \"e31e8556-76b9-48db-a630-4a990e4a432a\") " pod="openshift-marketplace/community-operators-km2jk" Nov 25 10:34:18 crc kubenswrapper[4813]: I1125 10:34:18.044188 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6clhd\" (UniqueName: \"kubernetes.io/projected/e31e8556-76b9-48db-a630-4a990e4a432a-kube-api-access-6clhd\") pod \"community-operators-km2jk\" (UID: \"e31e8556-76b9-48db-a630-4a990e4a432a\") " pod="openshift-marketplace/community-operators-km2jk" Nov 25 10:34:18 crc kubenswrapper[4813]: I1125 10:34:18.044229 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e31e8556-76b9-48db-a630-4a990e4a432a-catalog-content\") pod \"community-operators-km2jk\" (UID: \"e31e8556-76b9-48db-a630-4a990e4a432a\") " pod="openshift-marketplace/community-operators-km2jk" Nov 25 10:34:18 crc kubenswrapper[4813]: I1125 10:34:18.044877 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e31e8556-76b9-48db-a630-4a990e4a432a-catalog-content\") pod \"community-operators-km2jk\" (UID: \"e31e8556-76b9-48db-a630-4a990e4a432a\") " pod="openshift-marketplace/community-operators-km2jk" Nov 25 10:34:18 crc kubenswrapper[4813]: E1125 10:34:18.045321 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:18.545284972 +0000 UTC m=+155.674994898 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:18 crc kubenswrapper[4813]: I1125 10:34:18.045394 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e31e8556-76b9-48db-a630-4a990e4a432a-utilities\") pod \"community-operators-km2jk\" (UID: \"e31e8556-76b9-48db-a630-4a990e4a432a\") " pod="openshift-marketplace/community-operators-km2jk" Nov 25 10:34:18 crc kubenswrapper[4813]: I1125 10:34:18.073655 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6clhd\" (UniqueName: \"kubernetes.io/projected/e31e8556-76b9-48db-a630-4a990e4a432a-kube-api-access-6clhd\") pod \"community-operators-km2jk\" (UID: \"e31e8556-76b9-48db-a630-4a990e4a432a\") " pod="openshift-marketplace/community-operators-km2jk" Nov 25 10:34:18 crc kubenswrapper[4813]: I1125 10:34:18.152631 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:18 crc kubenswrapper[4813]: E1125 10:34:18.153381 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:18.653363463 +0000 UTC m=+155.783073349 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:18 crc kubenswrapper[4813]: I1125 10:34:18.234997 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-km2jk" Nov 25 10:34:18 crc kubenswrapper[4813]: I1125 10:34:18.257234 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:18 crc kubenswrapper[4813]: E1125 10:34:18.257338 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:18.757321342 +0000 UTC m=+155.887031218 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:18 crc kubenswrapper[4813]: I1125 10:34:18.257657 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:18 crc kubenswrapper[4813]: E1125 10:34:18.258045 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:18.758033681 +0000 UTC m=+155.887743567 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:18 crc kubenswrapper[4813]: I1125 10:34:18.359920 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:18 crc kubenswrapper[4813]: E1125 10:34:18.360329 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:18.860309553 +0000 UTC m=+155.990019449 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:18 crc kubenswrapper[4813]: I1125 10:34:18.373938 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 25 10:34:18 crc kubenswrapper[4813]: I1125 10:34:18.398392 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 10:34:18 crc kubenswrapper[4813]: I1125 10:34:18.401721 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Nov 25 10:34:18 crc kubenswrapper[4813]: I1125 10:34:18.404471 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Nov 25 10:34:18 crc kubenswrapper[4813]: I1125 10:34:18.412844 4813 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Nov 25 10:34:18 crc kubenswrapper[4813]: I1125 10:34:18.417714 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 25 10:34:18 crc kubenswrapper[4813]: I1125 10:34:18.462711 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2ff51e6c-cd70-4242-8e66-0bcc9b3e7388-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"2ff51e6c-cd70-4242-8e66-0bcc9b3e7388\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 10:34:18 crc kubenswrapper[4813]: I1125 10:34:18.462754 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:18 crc kubenswrapper[4813]: I1125 10:34:18.462851 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2ff51e6c-cd70-4242-8e66-0bcc9b3e7388-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"2ff51e6c-cd70-4242-8e66-0bcc9b3e7388\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 10:34:18 crc kubenswrapper[4813]: E1125 10:34:18.463146 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:18.96313407 +0000 UTC m=+156.092843956 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:18 crc kubenswrapper[4813]: I1125 10:34:18.546833 4813 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-frcz9 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 25 10:34:18 crc kubenswrapper[4813]: I1125 10:34:18.547296 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-frcz9" podUID="54ad0590-7880-4467-b980-334b0ea3807c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 25 10:34:18 crc kubenswrapper[4813]: I1125 10:34:18.552791 4813 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-frcz9 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 25 10:34:18 crc kubenswrapper[4813]: I1125 10:34:18.552858 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-frcz9" podUID="54ad0590-7880-4467-b980-334b0ea3807c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 25 10:34:18 crc kubenswrapper[4813]: I1125 10:34:18.564227 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:18 crc kubenswrapper[4813]: I1125 10:34:18.564525 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2ff51e6c-cd70-4242-8e66-0bcc9b3e7388-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"2ff51e6c-cd70-4242-8e66-0bcc9b3e7388\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 10:34:18 crc kubenswrapper[4813]: I1125 10:34:18.564559 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2ff51e6c-cd70-4242-8e66-0bcc9b3e7388-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"2ff51e6c-cd70-4242-8e66-0bcc9b3e7388\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 10:34:18 crc kubenswrapper[4813]: E1125 10:34:18.564829 4813 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:19.064799526 +0000 UTC m=+156.194509442 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:18 crc kubenswrapper[4813]: I1125 10:34:18.564908 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2ff51e6c-cd70-4242-8e66-0bcc9b3e7388-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"2ff51e6c-cd70-4242-8e66-0bcc9b3e7388\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 10:34:18 crc kubenswrapper[4813]: I1125 10:34:18.604810 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2ff51e6c-cd70-4242-8e66-0bcc9b3e7388-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"2ff51e6c-cd70-4242-8e66-0bcc9b3e7388\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 10:34:18 crc kubenswrapper[4813]: I1125 10:34:18.666436 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:18 crc kubenswrapper[4813]: E1125 10:34:18.667047 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 10:34:19.167032947 +0000 UTC m=+156.296742823 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wctdv" (UID: "89fdb811-5cae-4ece-a672-207a7af34036") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:18 crc kubenswrapper[4813]: I1125 10:34:18.717916 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-q2vkk" event={"ID":"61f4c501-c97d-4a5b-9105-1918dec567a8","Type":"ContainerStarted","Data":"2ce658848acae5f9d622c349cbc5d6ffdf8d748f965b6b8c31797add373ac1f5"} Nov 25 10:34:18 crc kubenswrapper[4813]: I1125 10:34:18.717977 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-q2vkk" event={"ID":"61f4c501-c97d-4a5b-9105-1918dec567a8","Type":"ContainerStarted","Data":"f298731834ec4da580b966cbfd31e60c6a7ed1b6ba657dd53e608455624942b8"} Nov 25 10:34:18 crc kubenswrapper[4813]: I1125 10:34:18.724743 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rhgxx"] Nov 25 10:34:18 crc kubenswrapper[4813]: I1125 10:34:18.767717 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:18 crc kubenswrapper[4813]: E1125 10:34:18.768295 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 10:34:19.26827205 +0000 UTC m=+156.397981936 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 10:34:18 crc kubenswrapper[4813]: I1125 10:34:18.779359 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mv2q6"] Nov 25 10:34:18 crc kubenswrapper[4813]: I1125 10:34:18.795125 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-trh25"] Nov 25 10:34:18 crc kubenswrapper[4813]: I1125 10:34:18.808023 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 10:34:18 crc kubenswrapper[4813]: I1125 10:34:18.810556 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-frcz9" Nov 25 10:34:18 crc kubenswrapper[4813]: I1125 10:34:18.818981 4813 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-11-25T10:34:18.412879873Z","Handler":null,"Name":""} Nov 25 10:34:18 crc kubenswrapper[4813]: I1125 10:34:18.865942 4813 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Nov 25 10:34:18 crc kubenswrapper[4813]: I1125 10:34:18.865979 4813 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Nov 25 10:34:18 crc kubenswrapper[4813]: I1125 10:34:18.878800 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:18 crc kubenswrapper[4813]: I1125 10:34:18.889666 4813 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 25 10:34:18 crc kubenswrapper[4813]: I1125 10:34:18.889744 4813 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:18 crc kubenswrapper[4813]: I1125 10:34:18.946753 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wctdv\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:18 crc kubenswrapper[4813]: I1125 10:34:18.974656 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-km2jk"] Nov 25 10:34:18 crc kubenswrapper[4813]: I1125 10:34:18.978934 4813 patch_prober.go:28] interesting pod/router-default-5444994796-hvj2g container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 10:34:18 crc kubenswrapper[4813]: [-]has-synced failed: reason withheld Nov 25 10:34:18 crc kubenswrapper[4813]: [+]process-running ok Nov 25 10:34:18 crc kubenswrapper[4813]: healthz check failed Nov 25 10:34:18 crc kubenswrapper[4813]: I1125 10:34:18.978996 4813 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hvj2g" podUID="b5580f94-06e3-4a91-b3e8-b1d7962438dd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 10:34:18 crc kubenswrapper[4813]: I1125 10:34:18.979498 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 10:34:18 crc kubenswrapper[4813]: I1125 10:34:18.994752 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 25 10:34:19 crc kubenswrapper[4813]: I1125 10:34:19.074236 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 10:34:19 crc kubenswrapper[4813]: I1125 10:34:19.097298 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:19 crc kubenswrapper[4813]: I1125 10:34:19.117775 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 25 10:34:19 crc kubenswrapper[4813]: I1125 10:34:19.182963 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/20964ab5-e31f-4fa0-8f95-807eca78e10e-kubelet-dir\") pod \"20964ab5-e31f-4fa0-8f95-807eca78e10e\" (UID: \"20964ab5-e31f-4fa0-8f95-807eca78e10e\") " Nov 25 10:34:19 crc kubenswrapper[4813]: I1125 10:34:19.183112 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/20964ab5-e31f-4fa0-8f95-807eca78e10e-kube-api-access\") pod \"20964ab5-e31f-4fa0-8f95-807eca78e10e\" (UID: \"20964ab5-e31f-4fa0-8f95-807eca78e10e\") " Nov 25 10:34:19 crc kubenswrapper[4813]: I1125 10:34:19.183507 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20964ab5-e31f-4fa0-8f95-807eca78e10e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "20964ab5-e31f-4fa0-8f95-807eca78e10e" (UID: "20964ab5-e31f-4fa0-8f95-807eca78e10e"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 10:34:19 crc kubenswrapper[4813]: I1125 10:34:19.202034 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20964ab5-e31f-4fa0-8f95-807eca78e10e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "20964ab5-e31f-4fa0-8f95-807eca78e10e" (UID: "20964ab5-e31f-4fa0-8f95-807eca78e10e"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:34:19 crc kubenswrapper[4813]: I1125 10:34:19.285076 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/20964ab5-e31f-4fa0-8f95-807eca78e10e-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 25 10:34:19 crc kubenswrapper[4813]: I1125 10:34:19.285133 4813 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/20964ab5-e31f-4fa0-8f95-807eca78e10e-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 25 10:34:19 crc kubenswrapper[4813]: I1125 10:34:19.369432 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wctdv"] Nov 25 10:34:19 crc kubenswrapper[4813]: I1125 10:34:19.482820 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-k4dl7"] Nov 25 10:34:19 crc kubenswrapper[4813]: E1125 10:34:19.483074 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20964ab5-e31f-4fa0-8f95-807eca78e10e" containerName="pruner" Nov 25 10:34:19 crc kubenswrapper[4813]: I1125 10:34:19.483089 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="20964ab5-e31f-4fa0-8f95-807eca78e10e" containerName="pruner" Nov 25 10:34:19 crc kubenswrapper[4813]: I1125 10:34:19.483201 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="20964ab5-e31f-4fa0-8f95-807eca78e10e" containerName="pruner" Nov 25 10:34:19 crc kubenswrapper[4813]: I1125 10:34:19.484137 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k4dl7" Nov 25 10:34:19 crc kubenswrapper[4813]: I1125 10:34:19.486561 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 25 10:34:19 crc kubenswrapper[4813]: I1125 10:34:19.501341 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k4dl7"] Nov 25 10:34:19 crc kubenswrapper[4813]: I1125 10:34:19.595397 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/070d675e-0557-4a9e-9a9a-1a5019547e2a-catalog-content\") pod \"redhat-marketplace-k4dl7\" (UID: \"070d675e-0557-4a9e-9a9a-1a5019547e2a\") " pod="openshift-marketplace/redhat-marketplace-k4dl7" Nov 25 10:34:19 crc kubenswrapper[4813]: I1125 10:34:19.595497 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4xgv\" (UniqueName: \"kubernetes.io/projected/070d675e-0557-4a9e-9a9a-1a5019547e2a-kube-api-access-d4xgv\") pod \"redhat-marketplace-k4dl7\" (UID: \"070d675e-0557-4a9e-9a9a-1a5019547e2a\") " pod="openshift-marketplace/redhat-marketplace-k4dl7" Nov 25 10:34:19 crc kubenswrapper[4813]: I1125 10:34:19.595530 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/070d675e-0557-4a9e-9a9a-1a5019547e2a-utilities\") pod \"redhat-marketplace-k4dl7\" (UID: \"070d675e-0557-4a9e-9a9a-1a5019547e2a\") " pod="openshift-marketplace/redhat-marketplace-k4dl7" Nov 25 10:34:19 crc kubenswrapper[4813]: I1125 10:34:19.630058 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" 
path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Nov 25 10:34:19 crc kubenswrapper[4813]: I1125 10:34:19.655221 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cg7wn" Nov 25 10:34:19 crc kubenswrapper[4813]: I1125 10:34:19.675291 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cg7wn" Nov 25 10:34:19 crc kubenswrapper[4813]: I1125 10:34:19.696537 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/070d675e-0557-4a9e-9a9a-1a5019547e2a-catalog-content\") pod \"redhat-marketplace-k4dl7\" (UID: \"070d675e-0557-4a9e-9a9a-1a5019547e2a\") " pod="openshift-marketplace/redhat-marketplace-k4dl7" Nov 25 10:34:19 crc kubenswrapper[4813]: I1125 10:34:19.696639 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4xgv\" (UniqueName: \"kubernetes.io/projected/070d675e-0557-4a9e-9a9a-1a5019547e2a-kube-api-access-d4xgv\") pod \"redhat-marketplace-k4dl7\" (UID: \"070d675e-0557-4a9e-9a9a-1a5019547e2a\") " pod="openshift-marketplace/redhat-marketplace-k4dl7" Nov 25 10:34:19 crc kubenswrapper[4813]: I1125 10:34:19.696671 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/070d675e-0557-4a9e-9a9a-1a5019547e2a-utilities\") pod \"redhat-marketplace-k4dl7\" (UID: \"070d675e-0557-4a9e-9a9a-1a5019547e2a\") " pod="openshift-marketplace/redhat-marketplace-k4dl7" Nov 25 10:34:19 crc kubenswrapper[4813]: I1125 10:34:19.697432 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/070d675e-0557-4a9e-9a9a-1a5019547e2a-catalog-content\") pod \"redhat-marketplace-k4dl7\" (UID: \"070d675e-0557-4a9e-9a9a-1a5019547e2a\") " pod="openshift-marketplace/redhat-marketplace-k4dl7" Nov 25 10:34:19 crc kubenswrapper[4813]: I1125 10:34:19.697793 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/070d675e-0557-4a9e-9a9a-1a5019547e2a-utilities\") pod \"redhat-marketplace-k4dl7\" (UID: \"070d675e-0557-4a9e-9a9a-1a5019547e2a\") " pod="openshift-marketplace/redhat-marketplace-k4dl7" Nov 25 10:34:19 crc kubenswrapper[4813]: I1125 10:34:19.719380 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4xgv\" (UniqueName: \"kubernetes.io/projected/070d675e-0557-4a9e-9a9a-1a5019547e2a-kube-api-access-d4xgv\") pod \"redhat-marketplace-k4dl7\" (UID: \"070d675e-0557-4a9e-9a9a-1a5019547e2a\") " pod="openshift-marketplace/redhat-marketplace-k4dl7" Nov 25 10:34:19 crc kubenswrapper[4813]: I1125 10:34:19.778212 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"20964ab5-e31f-4fa0-8f95-807eca78e10e","Type":"ContainerDied","Data":"94a3c20579235c97253e30e7dd4f95d50aeed859a0fb82b9b7b87f803d95ceee"} Nov 25 10:34:19 crc kubenswrapper[4813]: I1125 10:34:19.778269 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94a3c20579235c97253e30e7dd4f95d50aeed859a0fb82b9b7b87f803d95ceee" Nov 25 10:34:19 crc kubenswrapper[4813]: I1125 10:34:19.778398 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 10:34:19 crc kubenswrapper[4813]: I1125 10:34:19.788328 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"2ff51e6c-cd70-4242-8e66-0bcc9b3e7388","Type":"ContainerStarted","Data":"b7fb6ebf10a44a5e4b6b47c3855b97c6cc8b030b3f9c5cb3e4d3445811ff15e4"} Nov 25 10:34:19 crc kubenswrapper[4813]: I1125 10:34:19.788367 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"2ff51e6c-cd70-4242-8e66-0bcc9b3e7388","Type":"ContainerStarted","Data":"5ba7c6610fc3b32ba444bec62c0929b3c5c7711e820f5a05fcb822e79100ac0c"} Nov 25 10:34:19 crc kubenswrapper[4813]: I1125 10:34:19.819991 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-q2vkk" event={"ID":"61f4c501-c97d-4a5b-9105-1918dec567a8","Type":"ContainerStarted","Data":"3ec95e9b5b65f32b04871e0487cc6066e3d89338c376b27e57700052c99de6f2"} Nov 25 10:34:19 crc kubenswrapper[4813]: I1125 10:34:19.827023 4813 generic.go:334] "Generic (PLEG): container finished" podID="4c9a79a8-32f8-4018-b6e7-a76164389632" containerID="a9c9c39c578ccc160e22e6c3d9edc861c483fdb9f98e5ce6e49248bded661e84" exitCode=0 Nov 25 10:34:19 crc kubenswrapper[4813]: I1125 10:34:19.827085 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mv2q6" event={"ID":"4c9a79a8-32f8-4018-b6e7-a76164389632","Type":"ContainerDied","Data":"a9c9c39c578ccc160e22e6c3d9edc861c483fdb9f98e5ce6e49248bded661e84"} Nov 25 10:34:19 crc kubenswrapper[4813]: I1125 10:34:19.827108 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mv2q6" event={"ID":"4c9a79a8-32f8-4018-b6e7-a76164389632","Type":"ContainerStarted","Data":"4a4e55fc969518c81d136129070b808c05e25d901f01f7e6972289b26155f9cd"} Nov 25 10:34:19 crc kubenswrapper[4813]: I1125 10:34:19.829795 4813 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 10:34:19 crc kubenswrapper[4813]: I1125 10:34:19.831194 4813 generic.go:334] "Generic (PLEG): container finished" podID="a5deac33-30de-491e-94ff-53fe67de0eb8" containerID="466a163e5e20cbae5c0494cedce7a37492fcf70ec6a37cc1c30c8234c9d5e153" exitCode=0 Nov 25 10:34:19 crc kubenswrapper[4813]: I1125 10:34:19.831265 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rhgxx" event={"ID":"a5deac33-30de-491e-94ff-53fe67de0eb8","Type":"ContainerDied","Data":"466a163e5e20cbae5c0494cedce7a37492fcf70ec6a37cc1c30c8234c9d5e153"} Nov 25 10:34:19 crc kubenswrapper[4813]: I1125 10:34:19.831292 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rhgxx" event={"ID":"a5deac33-30de-491e-94ff-53fe67de0eb8","Type":"ContainerStarted","Data":"7e9c65c6e2a395f93a1f77b4487c6419c72be03dda10173fa706bbc663a7e4dc"} Nov 25 10:34:19 crc kubenswrapper[4813]: I1125 10:34:19.844107 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=1.844080264 podStartE2EDuration="1.844080264s" podCreationTimestamp="2025-11-25 10:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 10:34:19.833752711 +0000 UTC m=+156.963462607" watchObservedRunningTime="2025-11-25 
10:34:19.844080264 +0000 UTC m=+156.973790150" Nov 25 10:34:19 crc kubenswrapper[4813]: I1125 10:34:19.866991 4813 generic.go:334] "Generic (PLEG): container finished" podID="e31e8556-76b9-48db-a630-4a990e4a432a" containerID="6203c8a440a2eedb41013559e4f5a7cd9d2be0cb4b9b1a324b71f85fb0fdb3f7" exitCode=0 Nov 25 10:34:19 crc kubenswrapper[4813]: I1125 10:34:19.867107 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-km2jk" event={"ID":"e31e8556-76b9-48db-a630-4a990e4a432a","Type":"ContainerDied","Data":"6203c8a440a2eedb41013559e4f5a7cd9d2be0cb4b9b1a324b71f85fb0fdb3f7"} Nov 25 10:34:19 crc kubenswrapper[4813]: I1125 10:34:19.867141 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-km2jk" event={"ID":"e31e8556-76b9-48db-a630-4a990e4a432a","Type":"ContainerStarted","Data":"bf0ab9ecf99d7b0ba421c9af475cced24e556caa96abca886f789bdd0dc9aa4c"} Nov 25 10:34:19 crc kubenswrapper[4813]: I1125 10:34:19.884408 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" event={"ID":"89fdb811-5cae-4ece-a672-207a7af34036","Type":"ContainerStarted","Data":"d3ae5f6421a65a5f520e338c6f3e8f646931a32d7c8126d2c83fd47e387a8f9b"} Nov 25 10:34:19 crc kubenswrapper[4813]: I1125 10:34:19.884465 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" event={"ID":"89fdb811-5cae-4ece-a672-207a7af34036","Type":"ContainerStarted","Data":"27d28b71d02c039ac11b0c27af575c525fb1ff40e7e02d67ba462d88578c6295"} Nov 25 10:34:19 crc kubenswrapper[4813]: I1125 10:34:19.885229 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:19 crc kubenswrapper[4813]: I1125 10:34:19.886957 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-q2vkk" podStartSLOduration=17.886943798 podStartE2EDuration="17.886943798s" podCreationTimestamp="2025-11-25 10:34:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 10:34:19.874274851 +0000 UTC m=+157.003984747" watchObservedRunningTime="2025-11-25 10:34:19.886943798 +0000 UTC m=+157.016653704" Nov 25 10:34:19 crc kubenswrapper[4813]: I1125 10:34:19.888159 4813 generic.go:334] "Generic (PLEG): container finished" podID="9502b5b8-d91d-43fa-9498-4daeb00dd6ba" containerID="639870e2a2318789b1dea93a5bb8c37f47db6126bb4b8fbea876bf49f560d17a" exitCode=0 Nov 25 10:34:19 crc kubenswrapper[4813]: I1125 10:34:19.889462 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-trh25" event={"ID":"9502b5b8-d91d-43fa-9498-4daeb00dd6ba","Type":"ContainerDied","Data":"639870e2a2318789b1dea93a5bb8c37f47db6126bb4b8fbea876bf49f560d17a"} Nov 25 10:34:19 crc kubenswrapper[4813]: I1125 10:34:19.889488 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-trh25" event={"ID":"9502b5b8-d91d-43fa-9498-4daeb00dd6ba","Type":"ContainerStarted","Data":"d6ef8536a459567745d7557971762469e80782e0f56ec43cb5f73ab9d546ccbd"} Nov 25 10:34:19 crc kubenswrapper[4813]: I1125 10:34:19.894264 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-tl6vn"] Nov 25 10:34:19 crc kubenswrapper[4813]: I1125 10:34:19.897144 4813 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tl6vn" Nov 25 10:34:19 crc kubenswrapper[4813]: I1125 10:34:19.944480 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k4dl7" Nov 25 10:34:19 crc kubenswrapper[4813]: I1125 10:34:19.948446 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tl6vn"] Nov 25 10:34:19 crc kubenswrapper[4813]: I1125 10:34:19.987079 4813 patch_prober.go:28] interesting pod/router-default-5444994796-hvj2g container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 10:34:19 crc kubenswrapper[4813]: [-]has-synced failed: reason withheld Nov 25 10:34:19 crc kubenswrapper[4813]: [+]process-running ok Nov 25 10:34:19 crc kubenswrapper[4813]: healthz check failed Nov 25 10:34:19 crc kubenswrapper[4813]: I1125 10:34:19.987136 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hvj2g" podUID="b5580f94-06e3-4a91-b3e8-b1d7962438dd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 10:34:20 crc kubenswrapper[4813]: I1125 10:34:20.001007 4813 patch_prober.go:28] interesting pod/apiserver-76f77b778f-5ngzq container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Nov 25 10:34:20 crc kubenswrapper[4813]: [+]log ok Nov 25 10:34:20 crc kubenswrapper[4813]: [+]etcd ok Nov 25 10:34:20 crc kubenswrapper[4813]: [+]poststarthook/start-apiserver-admission-initializer ok Nov 25 10:34:20 crc kubenswrapper[4813]: [+]poststarthook/generic-apiserver-start-informers ok Nov 25 10:34:20 crc kubenswrapper[4813]: [+]poststarthook/max-in-flight-filter ok Nov 25 10:34:20 crc kubenswrapper[4813]: [+]poststarthook/storage-object-count-tracker-hook ok Nov 25 10:34:20 crc kubenswrapper[4813]: [+]poststarthook/image.openshift.io-apiserver-caches ok Nov 25 10:34:20 crc kubenswrapper[4813]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Nov 25 10:34:20 crc kubenswrapper[4813]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Nov 25 10:34:20 crc kubenswrapper[4813]: [+]poststarthook/project.openshift.io-projectcache ok Nov 25 10:34:20 crc kubenswrapper[4813]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Nov 25 10:34:20 crc kubenswrapper[4813]: [+]poststarthook/openshift.io-startinformers ok Nov 25 10:34:20 crc kubenswrapper[4813]: [+]poststarthook/openshift.io-restmapperupdater ok Nov 25 10:34:20 crc kubenswrapper[4813]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Nov 25 10:34:20 crc kubenswrapper[4813]: livez check failed Nov 25 10:34:20 crc kubenswrapper[4813]: I1125 10:34:20.001068 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-5ngzq" podUID="7a70dbef-bca6-47b6-8814-424cc0cbf441" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 10:34:20 crc kubenswrapper[4813]: I1125 10:34:20.104753 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" podStartSLOduration=136.104721925 
podStartE2EDuration="2m16.104721925s" podCreationTimestamp="2025-11-25 10:32:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 10:34:20.082017023 +0000 UTC m=+157.211726919" watchObservedRunningTime="2025-11-25 10:34:20.104721925 +0000 UTC m=+157.234431831" Nov 25 10:34:20 crc kubenswrapper[4813]: I1125 10:34:20.108743 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6lfx\" (UniqueName: \"kubernetes.io/projected/02d07673-818e-4645-9360-fa2300714f4c-kube-api-access-r6lfx\") pod \"redhat-marketplace-tl6vn\" (UID: \"02d07673-818e-4645-9360-fa2300714f4c\") " pod="openshift-marketplace/redhat-marketplace-tl6vn" Nov 25 10:34:20 crc kubenswrapper[4813]: I1125 10:34:20.109081 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02d07673-818e-4645-9360-fa2300714f4c-utilities\") pod \"redhat-marketplace-tl6vn\" (UID: \"02d07673-818e-4645-9360-fa2300714f4c\") " pod="openshift-marketplace/redhat-marketplace-tl6vn" Nov 25 10:34:20 crc kubenswrapper[4813]: I1125 10:34:20.109190 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02d07673-818e-4645-9360-fa2300714f4c-catalog-content\") pod \"redhat-marketplace-tl6vn\" (UID: \"02d07673-818e-4645-9360-fa2300714f4c\") " pod="openshift-marketplace/redhat-marketplace-tl6vn" Nov 25 10:34:20 crc kubenswrapper[4813]: I1125 10:34:20.213234 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6lfx\" (UniqueName: \"kubernetes.io/projected/02d07673-818e-4645-9360-fa2300714f4c-kube-api-access-r6lfx\") pod \"redhat-marketplace-tl6vn\" (UID: \"02d07673-818e-4645-9360-fa2300714f4c\") " pod="openshift-marketplace/redhat-marketplace-tl6vn" Nov 25 10:34:20 crc kubenswrapper[4813]: I1125 10:34:20.213742 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02d07673-818e-4645-9360-fa2300714f4c-utilities\") pod \"redhat-marketplace-tl6vn\" (UID: \"02d07673-818e-4645-9360-fa2300714f4c\") " pod="openshift-marketplace/redhat-marketplace-tl6vn" Nov 25 10:34:20 crc kubenswrapper[4813]: I1125 10:34:20.213772 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02d07673-818e-4645-9360-fa2300714f4c-catalog-content\") pod \"redhat-marketplace-tl6vn\" (UID: \"02d07673-818e-4645-9360-fa2300714f4c\") " pod="openshift-marketplace/redhat-marketplace-tl6vn" Nov 25 10:34:20 crc kubenswrapper[4813]: I1125 10:34:20.217133 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02d07673-818e-4645-9360-fa2300714f4c-catalog-content\") pod \"redhat-marketplace-tl6vn\" (UID: \"02d07673-818e-4645-9360-fa2300714f4c\") " pod="openshift-marketplace/redhat-marketplace-tl6vn" Nov 25 10:34:20 crc kubenswrapper[4813]: I1125 10:34:20.217366 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02d07673-818e-4645-9360-fa2300714f4c-utilities\") pod \"redhat-marketplace-tl6vn\" (UID: \"02d07673-818e-4645-9360-fa2300714f4c\") " pod="openshift-marketplace/redhat-marketplace-tl6vn" Nov 25 
10:34:20 crc kubenswrapper[4813]: I1125 10:34:20.240274 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6lfx\" (UniqueName: \"kubernetes.io/projected/02d07673-818e-4645-9360-fa2300714f4c-kube-api-access-r6lfx\") pod \"redhat-marketplace-tl6vn\" (UID: \"02d07673-818e-4645-9360-fa2300714f4c\") " pod="openshift-marketplace/redhat-marketplace-tl6vn" Nov 25 10:34:20 crc kubenswrapper[4813]: I1125 10:34:20.310628 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k4dl7"] Nov 25 10:34:20 crc kubenswrapper[4813]: I1125 10:34:20.486007 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hw9jq"] Nov 25 10:34:20 crc kubenswrapper[4813]: I1125 10:34:20.487924 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hw9jq" Nov 25 10:34:20 crc kubenswrapper[4813]: I1125 10:34:20.491031 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 25 10:34:20 crc kubenswrapper[4813]: I1125 10:34:20.496521 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hw9jq"] Nov 25 10:34:20 crc kubenswrapper[4813]: I1125 10:34:20.518040 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfm2n\" (UniqueName: \"kubernetes.io/projected/40aa702b-1c32-45ad-ba16-ae04f8da0675-kube-api-access-kfm2n\") pod \"redhat-operators-hw9jq\" (UID: \"40aa702b-1c32-45ad-ba16-ae04f8da0675\") " pod="openshift-marketplace/redhat-operators-hw9jq" Nov 25 10:34:20 crc kubenswrapper[4813]: I1125 10:34:20.518473 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40aa702b-1c32-45ad-ba16-ae04f8da0675-utilities\") pod \"redhat-operators-hw9jq\" (UID: \"40aa702b-1c32-45ad-ba16-ae04f8da0675\") " pod="openshift-marketplace/redhat-operators-hw9jq" Nov 25 10:34:20 crc kubenswrapper[4813]: I1125 10:34:20.518571 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40aa702b-1c32-45ad-ba16-ae04f8da0675-catalog-content\") pod \"redhat-operators-hw9jq\" (UID: \"40aa702b-1c32-45ad-ba16-ae04f8da0675\") " pod="openshift-marketplace/redhat-operators-hw9jq" Nov 25 10:34:20 crc kubenswrapper[4813]: I1125 10:34:20.540964 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tl6vn" Nov 25 10:34:20 crc kubenswrapper[4813]: I1125 10:34:20.619306 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfm2n\" (UniqueName: \"kubernetes.io/projected/40aa702b-1c32-45ad-ba16-ae04f8da0675-kube-api-access-kfm2n\") pod \"redhat-operators-hw9jq\" (UID: \"40aa702b-1c32-45ad-ba16-ae04f8da0675\") " pod="openshift-marketplace/redhat-operators-hw9jq" Nov 25 10:34:20 crc kubenswrapper[4813]: I1125 10:34:20.619385 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40aa702b-1c32-45ad-ba16-ae04f8da0675-utilities\") pod \"redhat-operators-hw9jq\" (UID: \"40aa702b-1c32-45ad-ba16-ae04f8da0675\") " pod="openshift-marketplace/redhat-operators-hw9jq" Nov 25 10:34:20 crc kubenswrapper[4813]: I1125 10:34:20.619411 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40aa702b-1c32-45ad-ba16-ae04f8da0675-catalog-content\") pod \"redhat-operators-hw9jq\" (UID: \"40aa702b-1c32-45ad-ba16-ae04f8da0675\") " pod="openshift-marketplace/redhat-operators-hw9jq" Nov 25 10:34:20 crc kubenswrapper[4813]: I1125 10:34:20.620212 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40aa702b-1c32-45ad-ba16-ae04f8da0675-utilities\") pod \"redhat-operators-hw9jq\" (UID: \"40aa702b-1c32-45ad-ba16-ae04f8da0675\") " pod="openshift-marketplace/redhat-operators-hw9jq" Nov 25 10:34:20 crc kubenswrapper[4813]: I1125 10:34:20.620216 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40aa702b-1c32-45ad-ba16-ae04f8da0675-catalog-content\") pod \"redhat-operators-hw9jq\" (UID: \"40aa702b-1c32-45ad-ba16-ae04f8da0675\") " pod="openshift-marketplace/redhat-operators-hw9jq" Nov 25 10:34:20 crc kubenswrapper[4813]: I1125 10:34:20.651555 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfm2n\" (UniqueName: \"kubernetes.io/projected/40aa702b-1c32-45ad-ba16-ae04f8da0675-kube-api-access-kfm2n\") pod \"redhat-operators-hw9jq\" (UID: \"40aa702b-1c32-45ad-ba16-ae04f8da0675\") " pod="openshift-marketplace/redhat-operators-hw9jq" Nov 25 10:34:20 crc kubenswrapper[4813]: I1125 10:34:20.812468 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hw9jq" Nov 25 10:34:20 crc kubenswrapper[4813]: I1125 10:34:20.892744 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-q6vc6"] Nov 25 10:34:20 crc kubenswrapper[4813]: I1125 10:34:20.894079 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-q6vc6" Nov 25 10:34:20 crc kubenswrapper[4813]: I1125 10:34:20.908355 4813 generic.go:334] "Generic (PLEG): container finished" podID="070d675e-0557-4a9e-9a9a-1a5019547e2a" containerID="4b4f61792dc8d30f13a72f510de745ee9c8baed57fea9824952db21e563bcd6c" exitCode=0 Nov 25 10:34:20 crc kubenswrapper[4813]: I1125 10:34:20.908417 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k4dl7" event={"ID":"070d675e-0557-4a9e-9a9a-1a5019547e2a","Type":"ContainerDied","Data":"4b4f61792dc8d30f13a72f510de745ee9c8baed57fea9824952db21e563bcd6c"} Nov 25 10:34:20 crc kubenswrapper[4813]: I1125 10:34:20.908444 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k4dl7" event={"ID":"070d675e-0557-4a9e-9a9a-1a5019547e2a","Type":"ContainerStarted","Data":"7b90b227bd233c35a032baf51cfd419bfdb84a1cf39fdb58b6b1e85863229d90"} Nov 25 10:34:20 crc kubenswrapper[4813]: I1125 10:34:20.912038 4813 generic.go:334] "Generic (PLEG): container finished" podID="2ff51e6c-cd70-4242-8e66-0bcc9b3e7388" containerID="b7fb6ebf10a44a5e4b6b47c3855b97c6cc8b030b3f9c5cb3e4d3445811ff15e4" exitCode=0 Nov 25 10:34:20 crc kubenswrapper[4813]: I1125 10:34:20.912571 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"2ff51e6c-cd70-4242-8e66-0bcc9b3e7388","Type":"ContainerDied","Data":"b7fb6ebf10a44a5e4b6b47c3855b97c6cc8b030b3f9c5cb3e4d3445811ff15e4"} Nov 25 10:34:20 crc kubenswrapper[4813]: I1125 10:34:20.916145 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q6vc6"] Nov 25 10:34:20 crc kubenswrapper[4813]: I1125 10:34:20.977549 4813 patch_prober.go:28] interesting pod/router-default-5444994796-hvj2g container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 10:34:20 crc kubenswrapper[4813]: [-]has-synced failed: reason withheld Nov 25 10:34:20 crc kubenswrapper[4813]: [+]process-running ok Nov 25 10:34:20 crc kubenswrapper[4813]: healthz check failed Nov 25 10:34:20 crc kubenswrapper[4813]: I1125 10:34:20.977625 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hvj2g" podUID="b5580f94-06e3-4a91-b3e8-b1d7962438dd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 10:34:21 crc kubenswrapper[4813]: I1125 10:34:21.029373 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khkg4\" (UniqueName: \"kubernetes.io/projected/1079a291-b9f2-4f78-b720-46f0893f5b88-kube-api-access-khkg4\") pod \"redhat-operators-q6vc6\" (UID: \"1079a291-b9f2-4f78-b720-46f0893f5b88\") " pod="openshift-marketplace/redhat-operators-q6vc6" Nov 25 10:34:21 crc kubenswrapper[4813]: I1125 10:34:21.029467 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1079a291-b9f2-4f78-b720-46f0893f5b88-utilities\") pod \"redhat-operators-q6vc6\" (UID: \"1079a291-b9f2-4f78-b720-46f0893f5b88\") " pod="openshift-marketplace/redhat-operators-q6vc6" Nov 25 10:34:21 crc kubenswrapper[4813]: I1125 10:34:21.029501 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/1079a291-b9f2-4f78-b720-46f0893f5b88-catalog-content\") pod \"redhat-operators-q6vc6\" (UID: \"1079a291-b9f2-4f78-b720-46f0893f5b88\") " pod="openshift-marketplace/redhat-operators-q6vc6" Nov 25 10:34:21 crc kubenswrapper[4813]: I1125 10:34:21.057544 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tl6vn"] Nov 25 10:34:21 crc kubenswrapper[4813]: I1125 10:34:21.058925 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-lsg8f" Nov 25 10:34:21 crc kubenswrapper[4813]: I1125 10:34:21.091322 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hw9jq"] Nov 25 10:34:21 crc kubenswrapper[4813]: W1125 10:34:21.100014 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod40aa702b_1c32_45ad_ba16_ae04f8da0675.slice/crio-62c7f2ba3c9453acc22ad94a75daed31aee222faf3a5223fcafc4d7e43f26e8f WatchSource:0}: Error finding container 62c7f2ba3c9453acc22ad94a75daed31aee222faf3a5223fcafc4d7e43f26e8f: Status 404 returned error can't find the container with id 62c7f2ba3c9453acc22ad94a75daed31aee222faf3a5223fcafc4d7e43f26e8f Nov 25 10:34:21 crc kubenswrapper[4813]: I1125 10:34:21.131296 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1079a291-b9f2-4f78-b720-46f0893f5b88-utilities\") pod \"redhat-operators-q6vc6\" (UID: \"1079a291-b9f2-4f78-b720-46f0893f5b88\") " pod="openshift-marketplace/redhat-operators-q6vc6" Nov 25 10:34:21 crc kubenswrapper[4813]: I1125 10:34:21.131358 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1079a291-b9f2-4f78-b720-46f0893f5b88-catalog-content\") pod \"redhat-operators-q6vc6\" (UID: \"1079a291-b9f2-4f78-b720-46f0893f5b88\") " pod="openshift-marketplace/redhat-operators-q6vc6" Nov 25 10:34:21 crc kubenswrapper[4813]: I1125 10:34:21.131443 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khkg4\" (UniqueName: \"kubernetes.io/projected/1079a291-b9f2-4f78-b720-46f0893f5b88-kube-api-access-khkg4\") pod \"redhat-operators-q6vc6\" (UID: \"1079a291-b9f2-4f78-b720-46f0893f5b88\") " pod="openshift-marketplace/redhat-operators-q6vc6" Nov 25 10:34:21 crc kubenswrapper[4813]: I1125 10:34:21.131910 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1079a291-b9f2-4f78-b720-46f0893f5b88-catalog-content\") pod \"redhat-operators-q6vc6\" (UID: \"1079a291-b9f2-4f78-b720-46f0893f5b88\") " pod="openshift-marketplace/redhat-operators-q6vc6" Nov 25 10:34:21 crc kubenswrapper[4813]: I1125 10:34:21.131936 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1079a291-b9f2-4f78-b720-46f0893f5b88-utilities\") pod \"redhat-operators-q6vc6\" (UID: \"1079a291-b9f2-4f78-b720-46f0893f5b88\") " pod="openshift-marketplace/redhat-operators-q6vc6" Nov 25 10:34:21 crc kubenswrapper[4813]: I1125 10:34:21.154081 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khkg4\" (UniqueName: \"kubernetes.io/projected/1079a291-b9f2-4f78-b720-46f0893f5b88-kube-api-access-khkg4\") pod \"redhat-operators-q6vc6\" (UID: \"1079a291-b9f2-4f78-b720-46f0893f5b88\") " 
pod="openshift-marketplace/redhat-operators-q6vc6" Nov 25 10:34:21 crc kubenswrapper[4813]: I1125 10:34:21.214320 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q6vc6" Nov 25 10:34:21 crc kubenswrapper[4813]: I1125 10:34:21.511860 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q6vc6"] Nov 25 10:34:21 crc kubenswrapper[4813]: W1125 10:34:21.520997 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1079a291_b9f2_4f78_b720_46f0893f5b88.slice/crio-2653be864821a06a36762a981b8802f0532a2c2e7242ec617b99cb38897ff6c7 WatchSource:0}: Error finding container 2653be864821a06a36762a981b8802f0532a2c2e7242ec617b99cb38897ff6c7: Status 404 returned error can't find the container with id 2653be864821a06a36762a981b8802f0532a2c2e7242ec617b99cb38897ff6c7 Nov 25 10:34:21 crc kubenswrapper[4813]: I1125 10:34:21.920244 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hw9jq" event={"ID":"40aa702b-1c32-45ad-ba16-ae04f8da0675","Type":"ContainerStarted","Data":"62c7f2ba3c9453acc22ad94a75daed31aee222faf3a5223fcafc4d7e43f26e8f"} Nov 25 10:34:21 crc kubenswrapper[4813]: I1125 10:34:21.921352 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q6vc6" event={"ID":"1079a291-b9f2-4f78-b720-46f0893f5b88","Type":"ContainerStarted","Data":"2653be864821a06a36762a981b8802f0532a2c2e7242ec617b99cb38897ff6c7"} Nov 25 10:34:21 crc kubenswrapper[4813]: I1125 10:34:21.922387 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tl6vn" event={"ID":"02d07673-818e-4645-9360-fa2300714f4c","Type":"ContainerStarted","Data":"73f870051059563b90ff6907643cbb66072ed013ad1c9ddc796f9bf14ecfeda5"} Nov 25 10:34:21 crc kubenswrapper[4813]: I1125 10:34:21.967638 4813 patch_prober.go:28] interesting pod/machine-config-daemon-knhz8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 10:34:21 crc kubenswrapper[4813]: I1125 10:34:21.968043 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" podUID="8ece7e9c-d49a-4348-98ec-bd6ab589f750" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 10:34:21 crc kubenswrapper[4813]: I1125 10:34:21.975615 4813 patch_prober.go:28] interesting pod/router-default-5444994796-hvj2g container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 10:34:21 crc kubenswrapper[4813]: [-]has-synced failed: reason withheld Nov 25 10:34:21 crc kubenswrapper[4813]: [+]process-running ok Nov 25 10:34:21 crc kubenswrapper[4813]: healthz check failed Nov 25 10:34:21 crc kubenswrapper[4813]: I1125 10:34:21.975706 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hvj2g" podUID="b5580f94-06e3-4a91-b3e8-b1d7962438dd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 10:34:22 crc kubenswrapper[4813]: I1125 10:34:22.194547 
4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 10:34:22 crc kubenswrapper[4813]: I1125 10:34:22.348342 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2ff51e6c-cd70-4242-8e66-0bcc9b3e7388-kubelet-dir\") pod \"2ff51e6c-cd70-4242-8e66-0bcc9b3e7388\" (UID: \"2ff51e6c-cd70-4242-8e66-0bcc9b3e7388\") " Nov 25 10:34:22 crc kubenswrapper[4813]: I1125 10:34:22.348931 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2ff51e6c-cd70-4242-8e66-0bcc9b3e7388-kube-api-access\") pod \"2ff51e6c-cd70-4242-8e66-0bcc9b3e7388\" (UID: \"2ff51e6c-cd70-4242-8e66-0bcc9b3e7388\") " Nov 25 10:34:22 crc kubenswrapper[4813]: I1125 10:34:22.349523 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2ff51e6c-cd70-4242-8e66-0bcc9b3e7388-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2ff51e6c-cd70-4242-8e66-0bcc9b3e7388" (UID: "2ff51e6c-cd70-4242-8e66-0bcc9b3e7388"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 10:34:22 crc kubenswrapper[4813]: I1125 10:34:22.355134 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ff51e6c-cd70-4242-8e66-0bcc9b3e7388-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2ff51e6c-cd70-4242-8e66-0bcc9b3e7388" (UID: "2ff51e6c-cd70-4242-8e66-0bcc9b3e7388"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:34:22 crc kubenswrapper[4813]: I1125 10:34:22.453817 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2ff51e6c-cd70-4242-8e66-0bcc9b3e7388-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 25 10:34:22 crc kubenswrapper[4813]: I1125 10:34:22.453881 4813 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2ff51e6c-cd70-4242-8e66-0bcc9b3e7388-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 25 10:34:22 crc kubenswrapper[4813]: I1125 10:34:22.933288 4813 generic.go:334] "Generic (PLEG): container finished" podID="40aa702b-1c32-45ad-ba16-ae04f8da0675" containerID="b806ff102fd4368644691419dda058689f65b84266b2eaa72921889ec09c1771" exitCode=0 Nov 25 10:34:22 crc kubenswrapper[4813]: I1125 10:34:22.933710 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hw9jq" event={"ID":"40aa702b-1c32-45ad-ba16-ae04f8da0675","Type":"ContainerDied","Data":"b806ff102fd4368644691419dda058689f65b84266b2eaa72921889ec09c1771"} Nov 25 10:34:22 crc kubenswrapper[4813]: I1125 10:34:22.938211 4813 generic.go:334] "Generic (PLEG): container finished" podID="1079a291-b9f2-4f78-b720-46f0893f5b88" containerID="c6868867638f5d4955847dba8cef1b80a30ef60a60835be232ed8293b5ce8e80" exitCode=0 Nov 25 10:34:22 crc kubenswrapper[4813]: I1125 10:34:22.938425 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q6vc6" event={"ID":"1079a291-b9f2-4f78-b720-46f0893f5b88","Type":"ContainerDied","Data":"c6868867638f5d4955847dba8cef1b80a30ef60a60835be232ed8293b5ce8e80"} Nov 25 10:34:22 crc kubenswrapper[4813]: I1125 10:34:22.943826 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"2ff51e6c-cd70-4242-8e66-0bcc9b3e7388","Type":"ContainerDied","Data":"5ba7c6610fc3b32ba444bec62c0929b3c5c7711e820f5a05fcb822e79100ac0c"} Nov 25 10:34:22 crc kubenswrapper[4813]: I1125 10:34:22.943869 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ba7c6610fc3b32ba444bec62c0929b3c5c7711e820f5a05fcb822e79100ac0c" Nov 25 10:34:22 crc kubenswrapper[4813]: I1125 10:34:22.943935 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 10:34:22 crc kubenswrapper[4813]: I1125 10:34:22.949706 4813 generic.go:334] "Generic (PLEG): container finished" podID="02d07673-818e-4645-9360-fa2300714f4c" containerID="820dae5de165fdb2c9a01c13caae86fcd4e828f770b264d462ea8ed108e95f3e" exitCode=0 Nov 25 10:34:22 crc kubenswrapper[4813]: I1125 10:34:22.949757 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tl6vn" event={"ID":"02d07673-818e-4645-9360-fa2300714f4c","Type":"ContainerDied","Data":"820dae5de165fdb2c9a01c13caae86fcd4e828f770b264d462ea8ed108e95f3e"} Nov 25 10:34:22 crc kubenswrapper[4813]: I1125 10:34:22.972535 4813 patch_prober.go:28] interesting pod/router-default-5444994796-hvj2g container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 10:34:22 crc kubenswrapper[4813]: [-]has-synced failed: reason withheld Nov 25 10:34:22 crc kubenswrapper[4813]: [+]process-running ok Nov 25 10:34:22 crc kubenswrapper[4813]: healthz check failed Nov 25 10:34:22 crc kubenswrapper[4813]: I1125 10:34:22.972622 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hvj2g" podUID="b5580f94-06e3-4a91-b3e8-b1d7962438dd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 10:34:23 crc kubenswrapper[4813]: I1125 10:34:23.971332 4813 patch_prober.go:28] interesting pod/router-default-5444994796-hvj2g container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 10:34:23 crc kubenswrapper[4813]: [-]has-synced failed: reason withheld Nov 25 10:34:23 crc kubenswrapper[4813]: [+]process-running ok Nov 25 10:34:23 crc kubenswrapper[4813]: healthz check failed Nov 25 10:34:23 crc kubenswrapper[4813]: I1125 10:34:23.971408 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hvj2g" podUID="b5580f94-06e3-4a91-b3e8-b1d7962438dd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 10:34:24 crc kubenswrapper[4813]: I1125 10:34:24.527519 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" Nov 25 10:34:24 crc kubenswrapper[4813]: I1125 10:34:24.972934 4813 patch_prober.go:28] interesting pod/router-default-5444994796-hvj2g container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 10:34:24 crc kubenswrapper[4813]: [-]has-synced failed: reason withheld Nov 25 10:34:24 crc kubenswrapper[4813]: [+]process-running ok Nov 25 10:34:24 crc kubenswrapper[4813]: healthz check 
failed Nov 25 10:34:24 crc kubenswrapper[4813]: I1125 10:34:24.973337 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hvj2g" podUID="b5580f94-06e3-4a91-b3e8-b1d7962438dd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 10:34:24 crc kubenswrapper[4813]: I1125 10:34:24.973704 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-5ngzq" Nov 25 10:34:24 crc kubenswrapper[4813]: I1125 10:34:24.978427 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-5ngzq" Nov 25 10:34:25 crc kubenswrapper[4813]: I1125 10:34:25.472867 4813 patch_prober.go:28] interesting pod/downloads-7954f5f757-482dq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 25 10:34:25 crc kubenswrapper[4813]: I1125 10:34:25.472921 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-482dq" podUID="a4fc4e54-61da-43ab-934e-5f7ed6178ab6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 25 10:34:25 crc kubenswrapper[4813]: I1125 10:34:25.473009 4813 patch_prober.go:28] interesting pod/downloads-7954f5f757-482dq container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 25 10:34:25 crc kubenswrapper[4813]: I1125 10:34:25.473097 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-482dq" podUID="a4fc4e54-61da-43ab-934e-5f7ed6178ab6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 25 10:34:25 crc kubenswrapper[4813]: I1125 10:34:25.972059 4813 patch_prober.go:28] interesting pod/router-default-5444994796-hvj2g container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 10:34:25 crc kubenswrapper[4813]: [-]has-synced failed: reason withheld Nov 25 10:34:25 crc kubenswrapper[4813]: [+]process-running ok Nov 25 10:34:25 crc kubenswrapper[4813]: healthz check failed Nov 25 10:34:25 crc kubenswrapper[4813]: I1125 10:34:25.972121 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hvj2g" podUID="b5580f94-06e3-4a91-b3e8-b1d7962438dd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 10:34:26 crc kubenswrapper[4813]: I1125 10:34:26.389792 4813 patch_prober.go:28] interesting pod/console-f9d7485db-rpfp2 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.30:8443/health\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body= Nov 25 10:34:26 crc kubenswrapper[4813]: I1125 10:34:26.389849 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-rpfp2" podUID="17571cbf-de36-4b34-af0b-3db7493adaf4" containerName="console" probeResult="failure" output="Get 
\"https://10.217.0.30:8443/health\": dial tcp 10.217.0.30:8443: connect: connection refused" Nov 25 10:34:26 crc kubenswrapper[4813]: I1125 10:34:26.972303 4813 patch_prober.go:28] interesting pod/router-default-5444994796-hvj2g container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 10:34:26 crc kubenswrapper[4813]: [-]has-synced failed: reason withheld Nov 25 10:34:26 crc kubenswrapper[4813]: [+]process-running ok Nov 25 10:34:26 crc kubenswrapper[4813]: healthz check failed Nov 25 10:34:26 crc kubenswrapper[4813]: I1125 10:34:26.972353 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hvj2g" podUID="b5580f94-06e3-4a91-b3e8-b1d7962438dd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 10:34:27 crc kubenswrapper[4813]: I1125 10:34:27.825582 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2-metrics-certs\") pod \"network-metrics-daemon-w28xl\" (UID: \"74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2\") " pod="openshift-multus/network-metrics-daemon-w28xl" Nov 25 10:34:27 crc kubenswrapper[4813]: I1125 10:34:27.835089 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2-metrics-certs\") pod \"network-metrics-daemon-w28xl\" (UID: \"74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2\") " pod="openshift-multus/network-metrics-daemon-w28xl" Nov 25 10:34:27 crc kubenswrapper[4813]: I1125 10:34:27.841233 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-w28xl" Nov 25 10:34:27 crc kubenswrapper[4813]: I1125 10:34:27.971668 4813 patch_prober.go:28] interesting pod/router-default-5444994796-hvj2g container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 10:34:27 crc kubenswrapper[4813]: [-]has-synced failed: reason withheld Nov 25 10:34:27 crc kubenswrapper[4813]: [+]process-running ok Nov 25 10:34:27 crc kubenswrapper[4813]: healthz check failed Nov 25 10:34:27 crc kubenswrapper[4813]: I1125 10:34:27.971874 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hvj2g" podUID="b5580f94-06e3-4a91-b3e8-b1d7962438dd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 10:34:28 crc kubenswrapper[4813]: I1125 10:34:28.971576 4813 patch_prober.go:28] interesting pod/router-default-5444994796-hvj2g container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 10:34:28 crc kubenswrapper[4813]: [-]has-synced failed: reason withheld Nov 25 10:34:28 crc kubenswrapper[4813]: [+]process-running ok Nov 25 10:34:28 crc kubenswrapper[4813]: healthz check failed Nov 25 10:34:28 crc kubenswrapper[4813]: I1125 10:34:28.971651 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hvj2g" podUID="b5580f94-06e3-4a91-b3e8-b1d7962438dd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 10:34:29 crc kubenswrapper[4813]: I1125 10:34:29.970792 4813 patch_prober.go:28] interesting pod/router-default-5444994796-hvj2g container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 10:34:29 crc kubenswrapper[4813]: [-]has-synced failed: reason withheld Nov 25 10:34:29 crc kubenswrapper[4813]: [+]process-running ok Nov 25 10:34:29 crc kubenswrapper[4813]: healthz check failed Nov 25 10:34:29 crc kubenswrapper[4813]: I1125 10:34:29.970870 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hvj2g" podUID="b5580f94-06e3-4a91-b3e8-b1d7962438dd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 10:34:30 crc kubenswrapper[4813]: I1125 10:34:30.970311 4813 patch_prober.go:28] interesting pod/router-default-5444994796-hvj2g container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 10:34:30 crc kubenswrapper[4813]: [-]has-synced failed: reason withheld Nov 25 10:34:30 crc kubenswrapper[4813]: [+]process-running ok Nov 25 10:34:30 crc kubenswrapper[4813]: healthz check failed Nov 25 10:34:30 crc kubenswrapper[4813]: I1125 10:34:30.970430 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hvj2g" podUID="b5580f94-06e3-4a91-b3e8-b1d7962438dd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 10:34:31 crc kubenswrapper[4813]: I1125 10:34:31.971134 4813 patch_prober.go:28] interesting pod/router-default-5444994796-hvj2g container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 10:34:31 crc kubenswrapper[4813]: [-]has-synced failed: reason withheld Nov 25 10:34:31 crc kubenswrapper[4813]: [+]process-running ok Nov 25 10:34:31 crc kubenswrapper[4813]: healthz check failed Nov 25 10:34:31 crc kubenswrapper[4813]: I1125 10:34:31.971263 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hvj2g" podUID="b5580f94-06e3-4a91-b3e8-b1d7962438dd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 10:34:32 crc kubenswrapper[4813]: I1125 10:34:32.971649 4813 patch_prober.go:28] interesting pod/router-default-5444994796-hvj2g container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 10:34:32 crc kubenswrapper[4813]: [-]has-synced failed: reason withheld Nov 25 10:34:32 crc kubenswrapper[4813]: [+]process-running ok Nov 25 10:34:32 crc kubenswrapper[4813]: healthz check failed Nov 25 10:34:32 crc kubenswrapper[4813]: I1125 10:34:32.972253 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hvj2g" podUID="b5580f94-06e3-4a91-b3e8-b1d7962438dd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 10:34:33 crc kubenswrapper[4813]: I1125 10:34:33.970817 4813 patch_prober.go:28] interesting pod/router-default-5444994796-hvj2g container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 10:34:33 crc kubenswrapper[4813]: [+]has-synced ok Nov 25 10:34:33 crc kubenswrapper[4813]: [+]process-running ok Nov 25 10:34:33 crc kubenswrapper[4813]: healthz check failed Nov 25 10:34:33 crc kubenswrapper[4813]: I1125 10:34:33.970876 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hvj2g" podUID="b5580f94-06e3-4a91-b3e8-b1d7962438dd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 10:34:34 crc kubenswrapper[4813]: I1125 10:34:34.972488 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-hvj2g" Nov 25 10:34:34 crc kubenswrapper[4813]: I1125 10:34:34.977395 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-hvj2g" Nov 25 10:34:35 crc kubenswrapper[4813]: I1125 10:34:35.473325 4813 patch_prober.go:28] interesting pod/downloads-7954f5f757-482dq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 25 10:34:35 crc kubenswrapper[4813]: I1125 10:34:35.473394 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-482dq" podUID="a4fc4e54-61da-43ab-934e-5f7ed6178ab6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 25 10:34:35 crc kubenswrapper[4813]: I1125 10:34:35.473549 4813 patch_prober.go:28] interesting pod/downloads-7954f5f757-482dq 
container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 25 10:34:35 crc kubenswrapper[4813]: I1125 10:34:35.473722 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-482dq" podUID="a4fc4e54-61da-43ab-934e-5f7ed6178ab6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 25 10:34:35 crc kubenswrapper[4813]: I1125 10:34:35.473856 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-482dq" Nov 25 10:34:35 crc kubenswrapper[4813]: I1125 10:34:35.474637 4813 patch_prober.go:28] interesting pod/downloads-7954f5f757-482dq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 25 10:34:35 crc kubenswrapper[4813]: I1125 10:34:35.474676 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-482dq" podUID="a4fc4e54-61da-43ab-934e-5f7ed6178ab6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 25 10:34:35 crc kubenswrapper[4813]: I1125 10:34:35.475253 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"fca302ad7b4801f6d55e5464ef2f6bc64ce853c553ac4696261fb261ae51b113"} pod="openshift-console/downloads-7954f5f757-482dq" containerMessage="Container download-server failed liveness probe, will be restarted" Nov 25 10:34:35 crc kubenswrapper[4813]: I1125 10:34:35.475481 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-482dq" podUID="a4fc4e54-61da-43ab-934e-5f7ed6178ab6" containerName="download-server" containerID="cri-o://fca302ad7b4801f6d55e5464ef2f6bc64ce853c553ac4696261fb261ae51b113" gracePeriod=2 Nov 25 10:34:36 crc kubenswrapper[4813]: I1125 10:34:36.674435 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-rpfp2" Nov 25 10:34:36 crc kubenswrapper[4813]: I1125 10:34:36.680463 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-rpfp2" Nov 25 10:34:37 crc kubenswrapper[4813]: I1125 10:34:37.048291 4813 generic.go:334] "Generic (PLEG): container finished" podID="a4fc4e54-61da-43ab-934e-5f7ed6178ab6" containerID="fca302ad7b4801f6d55e5464ef2f6bc64ce853c553ac4696261fb261ae51b113" exitCode=0 Nov 25 10:34:37 crc kubenswrapper[4813]: I1125 10:34:37.048394 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-482dq" event={"ID":"a4fc4e54-61da-43ab-934e-5f7ed6178ab6","Type":"ContainerDied","Data":"fca302ad7b4801f6d55e5464ef2f6bc64ce853c553ac4696261fb261ae51b113"} Nov 25 10:34:39 crc kubenswrapper[4813]: I1125 10:34:39.102830 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:34:45 crc kubenswrapper[4813]: I1125 10:34:45.473739 4813 patch_prober.go:28] interesting pod/downloads-7954f5f757-482dq container/download-server 
namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 25 10:34:45 crc kubenswrapper[4813]: I1125 10:34:45.474595 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-482dq" podUID="a4fc4e54-61da-43ab-934e-5f7ed6178ab6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 25 10:34:45 crc kubenswrapper[4813]: I1125 10:34:45.744609 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nklcx" Nov 25 10:34:51 crc kubenswrapper[4813]: I1125 10:34:51.967393 4813 patch_prober.go:28] interesting pod/machine-config-daemon-knhz8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 10:34:51 crc kubenswrapper[4813]: I1125 10:34:51.968076 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" podUID="8ece7e9c-d49a-4348-98ec-bd6ab589f750" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 10:34:52 crc kubenswrapper[4813]: I1125 10:34:52.730013 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 10:34:55 crc kubenswrapper[4813]: I1125 10:34:55.475720 4813 patch_prober.go:28] interesting pod/downloads-7954f5f757-482dq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 25 10:34:55 crc kubenswrapper[4813]: I1125 10:34:55.476029 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-482dq" podUID="a4fc4e54-61da-43ab-934e-5f7ed6178ab6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 25 10:35:03 crc kubenswrapper[4813]: E1125 10:35:03.351327 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Nov 25 10:35:03 crc kubenswrapper[4813]: E1125 10:35:03.352770 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r6lfx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-tl6vn_openshift-marketplace(02d07673-818e-4645-9360-fa2300714f4c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 25 10:35:03 crc kubenswrapper[4813]: E1125 10:35:03.354498 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-tl6vn" podUID="02d07673-818e-4645-9360-fa2300714f4c" Nov 25 10:35:05 crc kubenswrapper[4813]: I1125 10:35:05.476084 4813 patch_prober.go:28] interesting pod/downloads-7954f5f757-482dq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 25 10:35:05 crc kubenswrapper[4813]: I1125 10:35:05.476181 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-482dq" podUID="a4fc4e54-61da-43ab-934e-5f7ed6178ab6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 25 10:35:11 crc kubenswrapper[4813]: E1125 10:35:11.875904 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-tl6vn" podUID="02d07673-818e-4645-9360-fa2300714f4c" Nov 25 10:35:15 crc kubenswrapper[4813]: I1125 10:35:15.473388 4813 patch_prober.go:28] interesting pod/downloads-7954f5f757-482dq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 25 10:35:15 crc kubenswrapper[4813]: I1125 10:35:15.473458 4813 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-console/downloads-7954f5f757-482dq" podUID="a4fc4e54-61da-43ab-934e-5f7ed6178ab6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 25 10:35:16 crc kubenswrapper[4813]: E1125 10:35:16.984025 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Nov 25 10:35:16 crc kubenswrapper[4813]: E1125 10:35:16.984495 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lcnh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-mv2q6_openshift-marketplace(4c9a79a8-32f8-4018-b6e7-a76164389632): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 25 10:35:16 crc kubenswrapper[4813]: E1125 10:35:16.985724 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-mv2q6" podUID="4c9a79a8-32f8-4018-b6e7-a76164389632" Nov 25 10:35:17 crc kubenswrapper[4813]: E1125 10:35:17.910622 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Nov 25 10:35:17 crc kubenswrapper[4813]: E1125 10:35:17.910785 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6clhd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-km2jk_openshift-marketplace(e31e8556-76b9-48db-a630-4a990e4a432a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 25 10:35:17 crc kubenswrapper[4813]: E1125 10:35:17.912028 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-km2jk" podUID="e31e8556-76b9-48db-a630-4a990e4a432a" Nov 25 10:35:18 crc kubenswrapper[4813]: E1125 10:35:18.334972 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Nov 25 10:35:18 crc kubenswrapper[4813]: E1125 10:35:18.335144 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d4xgv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-k4dl7_openshift-marketplace(070d675e-0557-4a9e-9a9a-1a5019547e2a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 25 10:35:18 crc kubenswrapper[4813]: E1125 10:35:18.336315 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-k4dl7" podUID="070d675e-0557-4a9e-9a9a-1a5019547e2a" Nov 25 10:35:21 crc kubenswrapper[4813]: I1125 10:35:21.967890 4813 patch_prober.go:28] interesting pod/machine-config-daemon-knhz8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 10:35:21 crc kubenswrapper[4813]: I1125 10:35:21.968910 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" podUID="8ece7e9c-d49a-4348-98ec-bd6ab589f750" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 10:35:21 crc kubenswrapper[4813]: I1125 10:35:21.968994 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" Nov 25 10:35:21 crc kubenswrapper[4813]: I1125 10:35:21.969552 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c16599a2b18976267f55176085b4b11e3e253e308707081d06d28d64f4dbb627"} pod="openshift-machine-config-operator/machine-config-daemon-knhz8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 10:35:21 crc kubenswrapper[4813]: I1125 10:35:21.969609 4813 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-knhz8" podUID="8ece7e9c-d49a-4348-98ec-bd6ab589f750" containerName="machine-config-daemon" containerID="cri-o://c16599a2b18976267f55176085b4b11e3e253e308707081d06d28d64f4dbb627" gracePeriod=600 Nov 25 10:35:25 crc kubenswrapper[4813]: I1125 10:35:25.472921 4813 patch_prober.go:28] interesting pod/downloads-7954f5f757-482dq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 25 10:35:25 crc kubenswrapper[4813]: I1125 10:35:25.473272 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-482dq" podUID="a4fc4e54-61da-43ab-934e-5f7ed6178ab6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 25 10:35:35 crc kubenswrapper[4813]: I1125 10:35:35.473023 4813 patch_prober.go:28] interesting pod/downloads-7954f5f757-482dq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 25 10:35:35 crc kubenswrapper[4813]: I1125 10:35:35.473930 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-482dq" podUID="a4fc4e54-61da-43ab-934e-5f7ed6178ab6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 25 10:35:38 crc kubenswrapper[4813]: E1125 10:35:38.827428 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Nov 25 10:35:38 crc kubenswrapper[4813]: E1125 10:35:38.828080 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-khkg4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-q6vc6_openshift-marketplace(1079a291-b9f2-4f78-b720-46f0893f5b88): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 25 10:35:38 crc kubenswrapper[4813]: E1125 10:35:38.829335 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-q6vc6" podUID="1079a291-b9f2-4f78-b720-46f0893f5b88" Nov 25 10:35:39 crc kubenswrapper[4813]: I1125 10:35:39.430910 4813 generic.go:334] "Generic (PLEG): container finished" podID="8ece7e9c-d49a-4348-98ec-bd6ab589f750" containerID="c16599a2b18976267f55176085b4b11e3e253e308707081d06d28d64f4dbb627" exitCode=0 Nov 25 10:35:39 crc kubenswrapper[4813]: I1125 10:35:39.430982 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" event={"ID":"8ece7e9c-d49a-4348-98ec-bd6ab589f750","Type":"ContainerDied","Data":"c16599a2b18976267f55176085b4b11e3e253e308707081d06d28d64f4dbb627"} Nov 25 10:35:45 crc kubenswrapper[4813]: I1125 10:35:45.472674 4813 patch_prober.go:28] interesting pod/downloads-7954f5f757-482dq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 25 10:35:45 crc kubenswrapper[4813]: I1125 10:35:45.473079 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-482dq" podUID="a4fc4e54-61da-43ab-934e-5f7ed6178ab6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 25 10:35:46 crc kubenswrapper[4813]: E1125 10:35:46.623671 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/redhat-operators-q6vc6" podUID="1079a291-b9f2-4f78-b720-46f0893f5b88" Nov 25 10:35:46 crc kubenswrapper[4813]: I1125 10:35:46.824376 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-w28xl"] Nov 25 10:35:47 crc kubenswrapper[4813]: W1125 10:35:47.076581 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod74ce4ebf_54d7_48e4_bc5e_e5e8e49c37d2.slice/crio-505219556cf8f38265a35667077f53bfc2fd0cc4a3577c9f4cf1f6405ca7925a WatchSource:0}: Error finding container 505219556cf8f38265a35667077f53bfc2fd0cc4a3577c9f4cf1f6405ca7925a: Status 404 returned error can't find the container with id 505219556cf8f38265a35667077f53bfc2fd0cc4a3577c9f4cf1f6405ca7925a Nov 25 10:35:47 crc kubenswrapper[4813]: I1125 10:35:47.491711 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-w28xl" event={"ID":"74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2","Type":"ContainerStarted","Data":"505219556cf8f38265a35667077f53bfc2fd0cc4a3577c9f4cf1f6405ca7925a"} Nov 25 10:35:51 crc kubenswrapper[4813]: I1125 10:35:51.517714 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-482dq" event={"ID":"a4fc4e54-61da-43ab-934e-5f7ed6178ab6","Type":"ContainerStarted","Data":"754d83340a7f1f27976d7807e0c8a5831c5277ebd27c0a026d35a525eab51307"} Nov 25 10:35:51 crc kubenswrapper[4813]: E1125 10:35:51.704143 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Nov 25 10:35:51 crc kubenswrapper[4813]: E1125 10:35:51.704435 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g6xgg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-trh25_openshift-marketplace(9502b5b8-d91d-43fa-9498-4daeb00dd6ba): ErrImagePull: rpc error: code = 
Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 25 10:35:51 crc kubenswrapper[4813]: E1125 10:35:51.705719 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-trh25" podUID="9502b5b8-d91d-43fa-9498-4daeb00dd6ba" Nov 25 10:35:51 crc kubenswrapper[4813]: E1125 10:35:51.722650 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Nov 25 10:35:51 crc kubenswrapper[4813]: E1125 10:35:51.722882 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kfm2n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-hw9jq_openshift-marketplace(40aa702b-1c32-45ad-ba16-ae04f8da0675): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 25 10:35:51 crc kubenswrapper[4813]: E1125 10:35:51.724142 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-hw9jq" podUID="40aa702b-1c32-45ad-ba16-ae04f8da0675" Nov 25 10:35:52 crc kubenswrapper[4813]: E1125 10:35:52.526174 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-hw9jq" podUID="40aa702b-1c32-45ad-ba16-ae04f8da0675" Nov 25 10:35:52 crc 
kubenswrapper[4813]: E1125 10:35:52.526176 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-trh25" podUID="9502b5b8-d91d-43fa-9498-4daeb00dd6ba" Nov 25 10:35:53 crc kubenswrapper[4813]: I1125 10:35:53.531594 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-w28xl" event={"ID":"74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2","Type":"ContainerStarted","Data":"6ff2b27a19a6b255bf8cff843a15697e44f299e3213b51f0d78ead94b6523861"} Nov 25 10:35:54 crc kubenswrapper[4813]: I1125 10:35:54.546858 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" event={"ID":"8ece7e9c-d49a-4348-98ec-bd6ab589f750","Type":"ContainerStarted","Data":"fd431f86e1d06ef8a5974d3c06c01f9d692e72510667f2b9fb8a07e82ce4af6d"} Nov 25 10:35:55 crc kubenswrapper[4813]: I1125 10:35:55.473154 4813 patch_prober.go:28] interesting pod/downloads-7954f5f757-482dq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 25 10:35:55 crc kubenswrapper[4813]: I1125 10:35:55.473782 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-482dq" podUID="a4fc4e54-61da-43ab-934e-5f7ed6178ab6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 25 10:35:55 crc kubenswrapper[4813]: I1125 10:35:55.560077 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-482dq" Nov 25 10:35:55 crc kubenswrapper[4813]: I1125 10:35:55.560806 4813 patch_prober.go:28] interesting pod/downloads-7954f5f757-482dq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 25 10:35:55 crc kubenswrapper[4813]: I1125 10:35:55.560885 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-482dq" podUID="a4fc4e54-61da-43ab-934e-5f7ed6178ab6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 25 10:35:56 crc kubenswrapper[4813]: E1125 10:35:56.397506 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Nov 25 10:35:56 crc kubenswrapper[4813]: E1125 10:35:56.397981 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2vzd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-rhgxx_openshift-marketplace(a5deac33-30de-491e-94ff-53fe67de0eb8): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 25 10:35:56 crc kubenswrapper[4813]: E1125 10:35:56.399186 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-rhgxx" podUID="a5deac33-30de-491e-94ff-53fe67de0eb8" Nov 25 10:35:56 crc kubenswrapper[4813]: I1125 10:35:56.564381 4813 patch_prober.go:28] interesting pod/downloads-7954f5f757-482dq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 25 10:35:56 crc kubenswrapper[4813]: I1125 10:35:56.564453 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-482dq" podUID="a4fc4e54-61da-43ab-934e-5f7ed6178ab6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 25 10:35:56 crc kubenswrapper[4813]: E1125 10:35:56.567071 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-rhgxx" podUID="a5deac33-30de-491e-94ff-53fe67de0eb8" Nov 25 10:35:57 crc kubenswrapper[4813]: I1125 10:35:57.575658 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-w28xl" event={"ID":"74ce4ebf-54d7-48e4-bc5e-e5e8e49c37d2","Type":"ContainerStarted","Data":"28d9c85759c13585b214f51a32c4956015ca6aadcd71bb203a92d15771f9c9d8"} Nov 25 10:35:59 crc kubenswrapper[4813]: I1125 10:35:59.606360 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-multus/network-metrics-daemon-w28xl" podStartSLOduration=235.606344997 podStartE2EDuration="3m55.606344997s" podCreationTimestamp="2025-11-25 10:32:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 10:35:59.60385957 +0000 UTC m=+256.733569506" watchObservedRunningTime="2025-11-25 10:35:59.606344997 +0000 UTC m=+256.736054883" Nov 25 10:36:05 crc kubenswrapper[4813]: I1125 10:36:05.473344 4813 patch_prober.go:28] interesting pod/downloads-7954f5f757-482dq container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 25 10:36:05 crc kubenswrapper[4813]: I1125 10:36:05.473388 4813 patch_prober.go:28] interesting pod/downloads-7954f5f757-482dq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 25 10:36:05 crc kubenswrapper[4813]: I1125 10:36:05.473648 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-482dq" podUID="a4fc4e54-61da-43ab-934e-5f7ed6178ab6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 25 10:36:05 crc kubenswrapper[4813]: I1125 10:36:05.473710 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-482dq" podUID="a4fc4e54-61da-43ab-934e-5f7ed6178ab6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 25 10:36:15 crc kubenswrapper[4813]: I1125 10:36:15.489435 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-482dq" Nov 25 10:36:17 crc kubenswrapper[4813]: I1125 10:36:17.010973 4813 patch_prober.go:28] interesting pod/router-default-5444994796-hvj2g container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 25 10:36:17 crc kubenswrapper[4813]: I1125 10:36:17.012044 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-hvj2g" podUID="b5580f94-06e3-4a91-b3e8-b1d7962438dd" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 25 10:36:49 crc kubenswrapper[4813]: I1125 10:36:49.909142 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q6vc6" event={"ID":"1079a291-b9f2-4f78-b720-46f0893f5b88","Type":"ContainerStarted","Data":"40f80ef0a9d8bb303750eee465fd1d14f5336fe73089ffdf03d605a1cba3e3d3"} Nov 25 10:36:49 crc kubenswrapper[4813]: I1125 10:36:49.911146 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-km2jk" event={"ID":"e31e8556-76b9-48db-a630-4a990e4a432a","Type":"ContainerStarted","Data":"00d8db0d84ea9432bfc419179d3f456f7b92b6e64c4dff1b2994bd27066bb69c"} Nov 25 10:36:49 crc kubenswrapper[4813]: I1125 10:36:49.912536 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-rhgxx" event={"ID":"a5deac33-30de-491e-94ff-53fe67de0eb8","Type":"ContainerStarted","Data":"d7cbf2acf109b4d343c9a24742f2a1c733160bda4bf08a104fc3bb082b999d80"} Nov 25 10:36:49 crc kubenswrapper[4813]: I1125 10:36:49.914155 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-trh25" event={"ID":"9502b5b8-d91d-43fa-9498-4daeb00dd6ba","Type":"ContainerStarted","Data":"c53ea92f8fb186235636b1ec4cef531b9d6ba4d5e581c6a26bffb7fb6be2084b"} Nov 25 10:36:49 crc kubenswrapper[4813]: I1125 10:36:49.916028 4813 generic.go:334] "Generic (PLEG): container finished" podID="070d675e-0557-4a9e-9a9a-1a5019547e2a" containerID="87ff2b293d8bf7fa43e377e7ac812509b7d082ac5e6f214fc3a8a21b80f3dbfa" exitCode=0 Nov 25 10:36:49 crc kubenswrapper[4813]: I1125 10:36:49.916087 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k4dl7" event={"ID":"070d675e-0557-4a9e-9a9a-1a5019547e2a","Type":"ContainerDied","Data":"87ff2b293d8bf7fa43e377e7ac812509b7d082ac5e6f214fc3a8a21b80f3dbfa"} Nov 25 10:36:49 crc kubenswrapper[4813]: I1125 10:36:49.917895 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tl6vn" event={"ID":"02d07673-818e-4645-9360-fa2300714f4c","Type":"ContainerStarted","Data":"f219307fb64ce925ea9e11c7d2f62f2438c4a3b454cb830729d198842480d8bc"} Nov 25 10:36:49 crc kubenswrapper[4813]: I1125 10:36:49.920359 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mv2q6" event={"ID":"4c9a79a8-32f8-4018-b6e7-a76164389632","Type":"ContainerStarted","Data":"008535fd521e688e1160f8f654c4585daa80aa8f99d8e87eccbf4a4ad1d5224f"} Nov 25 10:36:49 crc kubenswrapper[4813]: I1125 10:36:49.921911 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hw9jq" event={"ID":"40aa702b-1c32-45ad-ba16-ae04f8da0675","Type":"ContainerStarted","Data":"553135b323032b4e718509fc5757e2ffb7b5060550a2533a32ac27f91ff6ecc2"} Nov 25 10:36:50 crc kubenswrapper[4813]: I1125 10:36:50.929491 4813 generic.go:334] "Generic (PLEG): container finished" podID="40aa702b-1c32-45ad-ba16-ae04f8da0675" containerID="553135b323032b4e718509fc5757e2ffb7b5060550a2533a32ac27f91ff6ecc2" exitCode=0 Nov 25 10:36:50 crc kubenswrapper[4813]: I1125 10:36:50.929556 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hw9jq" event={"ID":"40aa702b-1c32-45ad-ba16-ae04f8da0675","Type":"ContainerDied","Data":"553135b323032b4e718509fc5757e2ffb7b5060550a2533a32ac27f91ff6ecc2"} Nov 25 10:36:50 crc kubenswrapper[4813]: I1125 10:36:50.933358 4813 generic.go:334] "Generic (PLEG): container finished" podID="1079a291-b9f2-4f78-b720-46f0893f5b88" containerID="40f80ef0a9d8bb303750eee465fd1d14f5336fe73089ffdf03d605a1cba3e3d3" exitCode=0 Nov 25 10:36:50 crc kubenswrapper[4813]: I1125 10:36:50.933456 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q6vc6" event={"ID":"1079a291-b9f2-4f78-b720-46f0893f5b88","Type":"ContainerDied","Data":"40f80ef0a9d8bb303750eee465fd1d14f5336fe73089ffdf03d605a1cba3e3d3"} Nov 25 10:36:50 crc kubenswrapper[4813]: I1125 10:36:50.938000 4813 generic.go:334] "Generic (PLEG): container finished" podID="e31e8556-76b9-48db-a630-4a990e4a432a" containerID="00d8db0d84ea9432bfc419179d3f456f7b92b6e64c4dff1b2994bd27066bb69c" exitCode=0 Nov 25 10:36:50 crc kubenswrapper[4813]: I1125 10:36:50.938062 4813 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-km2jk" event={"ID":"e31e8556-76b9-48db-a630-4a990e4a432a","Type":"ContainerDied","Data":"00d8db0d84ea9432bfc419179d3f456f7b92b6e64c4dff1b2994bd27066bb69c"} Nov 25 10:36:50 crc kubenswrapper[4813]: I1125 10:36:50.941135 4813 generic.go:334] "Generic (PLEG): container finished" podID="a5deac33-30de-491e-94ff-53fe67de0eb8" containerID="d7cbf2acf109b4d343c9a24742f2a1c733160bda4bf08a104fc3bb082b999d80" exitCode=0 Nov 25 10:36:50 crc kubenswrapper[4813]: I1125 10:36:50.941186 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rhgxx" event={"ID":"a5deac33-30de-491e-94ff-53fe67de0eb8","Type":"ContainerDied","Data":"d7cbf2acf109b4d343c9a24742f2a1c733160bda4bf08a104fc3bb082b999d80"} Nov 25 10:36:50 crc kubenswrapper[4813]: I1125 10:36:50.942644 4813 generic.go:334] "Generic (PLEG): container finished" podID="9502b5b8-d91d-43fa-9498-4daeb00dd6ba" containerID="c53ea92f8fb186235636b1ec4cef531b9d6ba4d5e581c6a26bffb7fb6be2084b" exitCode=0 Nov 25 10:36:50 crc kubenswrapper[4813]: I1125 10:36:50.942778 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-trh25" event={"ID":"9502b5b8-d91d-43fa-9498-4daeb00dd6ba","Type":"ContainerDied","Data":"c53ea92f8fb186235636b1ec4cef531b9d6ba4d5e581c6a26bffb7fb6be2084b"} Nov 25 10:36:50 crc kubenswrapper[4813]: I1125 10:36:50.946357 4813 generic.go:334] "Generic (PLEG): container finished" podID="02d07673-818e-4645-9360-fa2300714f4c" containerID="f219307fb64ce925ea9e11c7d2f62f2438c4a3b454cb830729d198842480d8bc" exitCode=0 Nov 25 10:36:50 crc kubenswrapper[4813]: I1125 10:36:50.946481 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tl6vn" event={"ID":"02d07673-818e-4645-9360-fa2300714f4c","Type":"ContainerDied","Data":"f219307fb64ce925ea9e11c7d2f62f2438c4a3b454cb830729d198842480d8bc"} Nov 25 10:36:50 crc kubenswrapper[4813]: I1125 10:36:50.950928 4813 generic.go:334] "Generic (PLEG): container finished" podID="4c9a79a8-32f8-4018-b6e7-a76164389632" containerID="008535fd521e688e1160f8f654c4585daa80aa8f99d8e87eccbf4a4ad1d5224f" exitCode=0 Nov 25 10:36:50 crc kubenswrapper[4813]: I1125 10:36:50.950972 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mv2q6" event={"ID":"4c9a79a8-32f8-4018-b6e7-a76164389632","Type":"ContainerDied","Data":"008535fd521e688e1160f8f654c4585daa80aa8f99d8e87eccbf4a4ad1d5224f"} Nov 25 10:36:53 crc kubenswrapper[4813]: I1125 10:36:53.967903 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hw9jq" event={"ID":"40aa702b-1c32-45ad-ba16-ae04f8da0675","Type":"ContainerStarted","Data":"96ad241605b1da06e5dc504748015492e5f87b9b872730ba3e4757a5a5a8ae04"} Nov 25 10:36:53 crc kubenswrapper[4813]: I1125 10:36:53.970844 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tl6vn" event={"ID":"02d07673-818e-4645-9360-fa2300714f4c","Type":"ContainerStarted","Data":"da4bee3dbe8d6a223bf7e5611be930b1273ea9cb8f19148f1e34ac7581839f5b"} Nov 25 10:36:54 crc kubenswrapper[4813]: I1125 10:36:54.002790 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hw9jq" podStartSLOduration=4.1433615 podStartE2EDuration="2m34.002765767s" podCreationTimestamp="2025-11-25 10:34:20 +0000 UTC" firstStartedPulling="2025-11-25 
10:34:22.937150846 +0000 UTC m=+160.066860732" lastFinishedPulling="2025-11-25 10:36:52.796555103 +0000 UTC m=+309.926264999" observedRunningTime="2025-11-25 10:36:54.000590958 +0000 UTC m=+311.130300904" watchObservedRunningTime="2025-11-25 10:36:54.002765767 +0000 UTC m=+311.132475693" Nov 25 10:36:54 crc kubenswrapper[4813]: I1125 10:36:54.021393 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-tl6vn" podStartSLOduration=4.660315264 podStartE2EDuration="2m35.021375707s" podCreationTimestamp="2025-11-25 10:34:19 +0000 UTC" firstStartedPulling="2025-11-25 10:34:22.953259378 +0000 UTC m=+160.082969254" lastFinishedPulling="2025-11-25 10:36:53.314319811 +0000 UTC m=+310.444029697" observedRunningTime="2025-11-25 10:36:54.020271602 +0000 UTC m=+311.149981528" watchObservedRunningTime="2025-11-25 10:36:54.021375707 +0000 UTC m=+311.151085593" Nov 25 10:36:56 crc kubenswrapper[4813]: I1125 10:36:56.997506 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q6vc6" event={"ID":"1079a291-b9f2-4f78-b720-46f0893f5b88","Type":"ContainerStarted","Data":"2d0f9ba98f4ab197c0e565a865b846d19e9657dbc81a2a7070df0dc5b104fa9d"} Nov 25 10:36:57 crc kubenswrapper[4813]: I1125 10:36:57.018038 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-q6vc6" podStartSLOduration=5.477155504 podStartE2EDuration="2m37.018023177s" podCreationTimestamp="2025-11-25 10:34:20 +0000 UTC" firstStartedPulling="2025-11-25 10:34:22.940295672 +0000 UTC m=+160.070005558" lastFinishedPulling="2025-11-25 10:36:54.481163305 +0000 UTC m=+311.610873231" observedRunningTime="2025-11-25 10:36:57.016671932 +0000 UTC m=+314.146381828" watchObservedRunningTime="2025-11-25 10:36:57.018023177 +0000 UTC m=+314.147733063" Nov 25 10:37:00 crc kubenswrapper[4813]: I1125 10:37:00.016099 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k4dl7" event={"ID":"070d675e-0557-4a9e-9a9a-1a5019547e2a","Type":"ContainerStarted","Data":"f35ad792eb6175878544e83964ec81c3eda257cdcb44abe9d6cc54d071af693a"} Nov 25 10:37:00 crc kubenswrapper[4813]: I1125 10:37:00.039376 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-k4dl7" podStartSLOduration=4.063067226 podStartE2EDuration="2m41.039351257s" podCreationTimestamp="2025-11-25 10:34:19 +0000 UTC" firstStartedPulling="2025-11-25 10:34:20.911102648 +0000 UTC m=+158.040812534" lastFinishedPulling="2025-11-25 10:36:57.887386679 +0000 UTC m=+315.017096565" observedRunningTime="2025-11-25 10:37:00.038115496 +0000 UTC m=+317.167825382" watchObservedRunningTime="2025-11-25 10:37:00.039351257 +0000 UTC m=+317.169061143" Nov 25 10:37:00 crc kubenswrapper[4813]: I1125 10:37:00.541302 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-tl6vn" Nov 25 10:37:00 crc kubenswrapper[4813]: I1125 10:37:00.541421 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-tl6vn" Nov 25 10:37:00 crc kubenswrapper[4813]: I1125 10:37:00.813230 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hw9jq" Nov 25 10:37:00 crc kubenswrapper[4813]: I1125 10:37:00.813304 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-hw9jq" Nov 25 10:37:01 crc kubenswrapper[4813]: I1125 10:37:01.215187 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-q6vc6" Nov 25 10:37:01 crc kubenswrapper[4813]: I1125 10:37:01.215809 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-q6vc6" Nov 25 10:37:02 crc kubenswrapper[4813]: I1125 10:37:02.142280 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-tl6vn" Nov 25 10:37:02 crc kubenswrapper[4813]: I1125 10:37:02.190048 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-tl6vn" Nov 25 10:37:02 crc kubenswrapper[4813]: I1125 10:37:02.380436 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tl6vn"] Nov 25 10:37:03 crc kubenswrapper[4813]: I1125 10:37:03.109061 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hw9jq" podUID="40aa702b-1c32-45ad-ba16-ae04f8da0675" containerName="registry-server" probeResult="failure" output=< Nov 25 10:37:03 crc kubenswrapper[4813]: timeout: failed to connect service ":50051" within 1s Nov 25 10:37:03 crc kubenswrapper[4813]: > Nov 25 10:37:03 crc kubenswrapper[4813]: I1125 10:37:03.110088 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-q6vc6" podUID="1079a291-b9f2-4f78-b720-46f0893f5b88" containerName="registry-server" probeResult="failure" output=< Nov 25 10:37:03 crc kubenswrapper[4813]: timeout: failed to connect service ":50051" within 1s Nov 25 10:37:03 crc kubenswrapper[4813]: > Nov 25 10:37:04 crc kubenswrapper[4813]: I1125 10:37:04.043549 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-tl6vn" podUID="02d07673-818e-4645-9360-fa2300714f4c" containerName="registry-server" containerID="cri-o://da4bee3dbe8d6a223bf7e5611be930b1273ea9cb8f19148f1e34ac7581839f5b" gracePeriod=2 Nov 25 10:37:06 crc kubenswrapper[4813]: I1125 10:37:06.056470 4813 generic.go:334] "Generic (PLEG): container finished" podID="02d07673-818e-4645-9360-fa2300714f4c" containerID="da4bee3dbe8d6a223bf7e5611be930b1273ea9cb8f19148f1e34ac7581839f5b" exitCode=0 Nov 25 10:37:06 crc kubenswrapper[4813]: I1125 10:37:06.056561 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tl6vn" event={"ID":"02d07673-818e-4645-9360-fa2300714f4c","Type":"ContainerDied","Data":"da4bee3dbe8d6a223bf7e5611be930b1273ea9cb8f19148f1e34ac7581839f5b"} Nov 25 10:37:08 crc kubenswrapper[4813]: I1125 10:37:08.426084 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tl6vn" Nov 25 10:37:08 crc kubenswrapper[4813]: I1125 10:37:08.481617 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02d07673-818e-4645-9360-fa2300714f4c-utilities\") pod \"02d07673-818e-4645-9360-fa2300714f4c\" (UID: \"02d07673-818e-4645-9360-fa2300714f4c\") " Nov 25 10:37:08 crc kubenswrapper[4813]: I1125 10:37:08.481867 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02d07673-818e-4645-9360-fa2300714f4c-catalog-content\") pod \"02d07673-818e-4645-9360-fa2300714f4c\" (UID: \"02d07673-818e-4645-9360-fa2300714f4c\") " Nov 25 10:37:08 crc kubenswrapper[4813]: I1125 10:37:08.483100 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02d07673-818e-4645-9360-fa2300714f4c-utilities" (OuterVolumeSpecName: "utilities") pod "02d07673-818e-4645-9360-fa2300714f4c" (UID: "02d07673-818e-4645-9360-fa2300714f4c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 10:37:08 crc kubenswrapper[4813]: I1125 10:37:08.483913 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6lfx\" (UniqueName: \"kubernetes.io/projected/02d07673-818e-4645-9360-fa2300714f4c-kube-api-access-r6lfx\") pod \"02d07673-818e-4645-9360-fa2300714f4c\" (UID: \"02d07673-818e-4645-9360-fa2300714f4c\") " Nov 25 10:37:08 crc kubenswrapper[4813]: I1125 10:37:08.484754 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02d07673-818e-4645-9360-fa2300714f4c-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 10:37:08 crc kubenswrapper[4813]: I1125 10:37:08.495095 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02d07673-818e-4645-9360-fa2300714f4c-kube-api-access-r6lfx" (OuterVolumeSpecName: "kube-api-access-r6lfx") pod "02d07673-818e-4645-9360-fa2300714f4c" (UID: "02d07673-818e-4645-9360-fa2300714f4c"). InnerVolumeSpecName "kube-api-access-r6lfx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:37:08 crc kubenswrapper[4813]: I1125 10:37:08.497531 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02d07673-818e-4645-9360-fa2300714f4c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "02d07673-818e-4645-9360-fa2300714f4c" (UID: "02d07673-818e-4645-9360-fa2300714f4c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 10:37:08 crc kubenswrapper[4813]: I1125 10:37:08.585565 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6lfx\" (UniqueName: \"kubernetes.io/projected/02d07673-818e-4645-9360-fa2300714f4c-kube-api-access-r6lfx\") on node \"crc\" DevicePath \"\"" Nov 25 10:37:08 crc kubenswrapper[4813]: I1125 10:37:08.585616 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02d07673-818e-4645-9360-fa2300714f4c-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 10:37:09 crc kubenswrapper[4813]: I1125 10:37:09.079329 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tl6vn" event={"ID":"02d07673-818e-4645-9360-fa2300714f4c","Type":"ContainerDied","Data":"73f870051059563b90ff6907643cbb66072ed013ad1c9ddc796f9bf14ecfeda5"} Nov 25 10:37:09 crc kubenswrapper[4813]: I1125 10:37:09.079452 4813 scope.go:117] "RemoveContainer" containerID="da4bee3dbe8d6a223bf7e5611be930b1273ea9cb8f19148f1e34ac7581839f5b" Nov 25 10:37:09 crc kubenswrapper[4813]: I1125 10:37:09.079458 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tl6vn" Nov 25 10:37:09 crc kubenswrapper[4813]: I1125 10:37:09.121629 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tl6vn"] Nov 25 10:37:09 crc kubenswrapper[4813]: I1125 10:37:09.125138 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-tl6vn"] Nov 25 10:37:09 crc kubenswrapper[4813]: I1125 10:37:09.632524 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02d07673-818e-4645-9360-fa2300714f4c" path="/var/lib/kubelet/pods/02d07673-818e-4645-9360-fa2300714f4c/volumes" Nov 25 10:37:09 crc kubenswrapper[4813]: I1125 10:37:09.945124 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-k4dl7" Nov 25 10:37:09 crc kubenswrapper[4813]: I1125 10:37:09.945202 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-k4dl7" Nov 25 10:37:10 crc kubenswrapper[4813]: I1125 10:37:10.008161 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-k4dl7" Nov 25 10:37:10 crc kubenswrapper[4813]: I1125 10:37:10.141461 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-k4dl7" Nov 25 10:37:10 crc kubenswrapper[4813]: I1125 10:37:10.862829 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hw9jq" Nov 25 10:37:10 crc kubenswrapper[4813]: I1125 10:37:10.905396 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hw9jq" Nov 25 10:37:11 crc kubenswrapper[4813]: I1125 10:37:11.259408 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-q6vc6" Nov 25 10:37:11 crc kubenswrapper[4813]: I1125 10:37:11.303761 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-q6vc6" Nov 25 10:37:12 crc kubenswrapper[4813]: I1125 10:37:12.658921 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-q6vc6"] Nov 25 10:37:13 crc kubenswrapper[4813]: I1125 10:37:13.101424 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-q6vc6" podUID="1079a291-b9f2-4f78-b720-46f0893f5b88" containerName="registry-server" containerID="cri-o://2d0f9ba98f4ab197c0e565a865b846d19e9657dbc81a2a7070df0dc5b104fa9d" gracePeriod=2 Nov 25 10:37:13 crc kubenswrapper[4813]: I1125 10:37:13.482941 4813 scope.go:117] "RemoveContainer" containerID="f219307fb64ce925ea9e11c7d2f62f2438c4a3b454cb830729d198842480d8bc" Nov 25 10:37:15 crc kubenswrapper[4813]: I1125 10:37:15.116363 4813 generic.go:334] "Generic (PLEG): container finished" podID="1079a291-b9f2-4f78-b720-46f0893f5b88" containerID="2d0f9ba98f4ab197c0e565a865b846d19e9657dbc81a2a7070df0dc5b104fa9d" exitCode=0 Nov 25 10:37:15 crc kubenswrapper[4813]: I1125 10:37:15.116409 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q6vc6" event={"ID":"1079a291-b9f2-4f78-b720-46f0893f5b88","Type":"ContainerDied","Data":"2d0f9ba98f4ab197c0e565a865b846d19e9657dbc81a2a7070df0dc5b104fa9d"} Nov 25 10:37:16 crc kubenswrapper[4813]: I1125 10:37:16.078771 4813 scope.go:117] "RemoveContainer" containerID="820dae5de165fdb2c9a01c13caae86fcd4e828f770b264d462ea8ed108e95f3e" Nov 25 10:37:16 crc kubenswrapper[4813]: I1125 10:37:16.436469 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q6vc6" Nov 25 10:37:16 crc kubenswrapper[4813]: I1125 10:37:16.501491 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khkg4\" (UniqueName: \"kubernetes.io/projected/1079a291-b9f2-4f78-b720-46f0893f5b88-kube-api-access-khkg4\") pod \"1079a291-b9f2-4f78-b720-46f0893f5b88\" (UID: \"1079a291-b9f2-4f78-b720-46f0893f5b88\") " Nov 25 10:37:16 crc kubenswrapper[4813]: I1125 10:37:16.501601 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1079a291-b9f2-4f78-b720-46f0893f5b88-utilities\") pod \"1079a291-b9f2-4f78-b720-46f0893f5b88\" (UID: \"1079a291-b9f2-4f78-b720-46f0893f5b88\") " Nov 25 10:37:16 crc kubenswrapper[4813]: I1125 10:37:16.501631 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1079a291-b9f2-4f78-b720-46f0893f5b88-catalog-content\") pod \"1079a291-b9f2-4f78-b720-46f0893f5b88\" (UID: \"1079a291-b9f2-4f78-b720-46f0893f5b88\") " Nov 25 10:37:16 crc kubenswrapper[4813]: I1125 10:37:16.505931 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1079a291-b9f2-4f78-b720-46f0893f5b88-utilities" (OuterVolumeSpecName: "utilities") pod "1079a291-b9f2-4f78-b720-46f0893f5b88" (UID: "1079a291-b9f2-4f78-b720-46f0893f5b88"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 10:37:16 crc kubenswrapper[4813]: I1125 10:37:16.509533 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1079a291-b9f2-4f78-b720-46f0893f5b88-kube-api-access-khkg4" (OuterVolumeSpecName: "kube-api-access-khkg4") pod "1079a291-b9f2-4f78-b720-46f0893f5b88" (UID: "1079a291-b9f2-4f78-b720-46f0893f5b88"). InnerVolumeSpecName "kube-api-access-khkg4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:37:16 crc kubenswrapper[4813]: I1125 10:37:16.603893 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1079a291-b9f2-4f78-b720-46f0893f5b88-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 10:37:16 crc kubenswrapper[4813]: I1125 10:37:16.603937 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-khkg4\" (UniqueName: \"kubernetes.io/projected/1079a291-b9f2-4f78-b720-46f0893f5b88-kube-api-access-khkg4\") on node \"crc\" DevicePath \"\"" Nov 25 10:37:16 crc kubenswrapper[4813]: I1125 10:37:16.664226 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1079a291-b9f2-4f78-b720-46f0893f5b88-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1079a291-b9f2-4f78-b720-46f0893f5b88" (UID: "1079a291-b9f2-4f78-b720-46f0893f5b88"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 10:37:16 crc kubenswrapper[4813]: I1125 10:37:16.705142 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1079a291-b9f2-4f78-b720-46f0893f5b88-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 10:37:17 crc kubenswrapper[4813]: I1125 10:37:17.133405 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rhgxx" event={"ID":"a5deac33-30de-491e-94ff-53fe67de0eb8","Type":"ContainerStarted","Data":"34dd1ac82103b0a311e70a5d10c0f651d4123939c34d3d3f67ea13d6034f690a"} Nov 25 10:37:17 crc kubenswrapper[4813]: I1125 10:37:17.135529 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-km2jk" event={"ID":"e31e8556-76b9-48db-a630-4a990e4a432a","Type":"ContainerStarted","Data":"56c704f0c96fcd542200feb246ca49a2055f1e0cd71b22418070d2a522bacfff"} Nov 25 10:37:17 crc kubenswrapper[4813]: I1125 10:37:17.138452 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-trh25" event={"ID":"9502b5b8-d91d-43fa-9498-4daeb00dd6ba","Type":"ContainerStarted","Data":"77ab10cf72d7430c4fb9a45e5a775991c45f38de1f2c18d4a5bba8583b220246"} Nov 25 10:37:17 crc kubenswrapper[4813]: I1125 10:37:17.142491 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mv2q6" event={"ID":"4c9a79a8-32f8-4018-b6e7-a76164389632","Type":"ContainerStarted","Data":"6e06446738c7715377f520f43cabdf8056d40d5b9df55f23345d6cb467ca8bba"} Nov 25 10:37:17 crc kubenswrapper[4813]: I1125 10:37:17.144627 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q6vc6" event={"ID":"1079a291-b9f2-4f78-b720-46f0893f5b88","Type":"ContainerDied","Data":"2653be864821a06a36762a981b8802f0532a2c2e7242ec617b99cb38897ff6c7"} Nov 25 10:37:17 crc kubenswrapper[4813]: I1125 10:37:17.144672 4813 scope.go:117] "RemoveContainer" containerID="2d0f9ba98f4ab197c0e565a865b846d19e9657dbc81a2a7070df0dc5b104fa9d" Nov 25 10:37:17 crc kubenswrapper[4813]: I1125 10:37:17.144726 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-q6vc6" Nov 25 10:37:17 crc kubenswrapper[4813]: I1125 10:37:17.160897 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rhgxx" podStartSLOduration=4.588064109 podStartE2EDuration="3m0.160871126s" podCreationTimestamp="2025-11-25 10:34:17 +0000 UTC" firstStartedPulling="2025-11-25 10:34:19.837908625 +0000 UTC m=+156.967618511" lastFinishedPulling="2025-11-25 10:37:15.410715642 +0000 UTC m=+332.540425528" observedRunningTime="2025-11-25 10:37:17.158845525 +0000 UTC m=+334.288555431" watchObservedRunningTime="2025-11-25 10:37:17.160871126 +0000 UTC m=+334.290581012" Nov 25 10:37:17 crc kubenswrapper[4813]: I1125 10:37:17.173384 4813 scope.go:117] "RemoveContainer" containerID="40f80ef0a9d8bb303750eee465fd1d14f5336fe73089ffdf03d605a1cba3e3d3" Nov 25 10:37:17 crc kubenswrapper[4813]: I1125 10:37:17.188158 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-trh25" podStartSLOduration=15.116830628 podStartE2EDuration="3m0.188139004s" podCreationTimestamp="2025-11-25 10:34:17 +0000 UTC" firstStartedPulling="2025-11-25 10:34:19.898862675 +0000 UTC m=+157.028572561" lastFinishedPulling="2025-11-25 10:37:04.970171051 +0000 UTC m=+322.099880937" observedRunningTime="2025-11-25 10:37:17.184598216 +0000 UTC m=+334.314308122" watchObservedRunningTime="2025-11-25 10:37:17.188139004 +0000 UTC m=+334.317848900" Nov 25 10:37:17 crc kubenswrapper[4813]: I1125 10:37:17.195842 4813 scope.go:117] "RemoveContainer" containerID="c6868867638f5d4955847dba8cef1b80a30ef60a60835be232ed8293b5ce8e80" Nov 25 10:37:17 crc kubenswrapper[4813]: I1125 10:37:17.207750 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-km2jk" podStartSLOduration=3.998830979 podStartE2EDuration="3m0.207730079s" podCreationTimestamp="2025-11-25 10:34:17 +0000 UTC" firstStartedPulling="2025-11-25 10:34:19.869824069 +0000 UTC m=+156.999533955" lastFinishedPulling="2025-11-25 10:37:16.078723169 +0000 UTC m=+333.208433055" observedRunningTime="2025-11-25 10:37:17.205892853 +0000 UTC m=+334.335602759" watchObservedRunningTime="2025-11-25 10:37:17.207730079 +0000 UTC m=+334.337439965" Nov 25 10:37:17 crc kubenswrapper[4813]: I1125 10:37:17.234086 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mv2q6" podStartSLOduration=3.665634575 podStartE2EDuration="3m0.234068208s" podCreationTimestamp="2025-11-25 10:34:17 +0000 UTC" firstStartedPulling="2025-11-25 10:34:19.829503155 +0000 UTC m=+156.959213041" lastFinishedPulling="2025-11-25 10:37:16.397936788 +0000 UTC m=+333.527646674" observedRunningTime="2025-11-25 10:37:17.232434299 +0000 UTC m=+334.362144205" watchObservedRunningTime="2025-11-25 10:37:17.234068208 +0000 UTC m=+334.363778094" Nov 25 10:37:17 crc kubenswrapper[4813]: I1125 10:37:17.257073 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-q6vc6"] Nov 25 10:37:17 crc kubenswrapper[4813]: I1125 10:37:17.269815 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-q6vc6"] Nov 25 10:37:17 crc kubenswrapper[4813]: I1125 10:37:17.627859 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1079a291-b9f2-4f78-b720-46f0893f5b88" 
path="/var/lib/kubelet/pods/1079a291-b9f2-4f78-b720-46f0893f5b88/volumes" Nov 25 10:37:17 crc kubenswrapper[4813]: I1125 10:37:17.633158 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rhgxx" Nov 25 10:37:17 crc kubenswrapper[4813]: I1125 10:37:17.633287 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rhgxx" Nov 25 10:37:17 crc kubenswrapper[4813]: I1125 10:37:17.847012 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mv2q6" Nov 25 10:37:17 crc kubenswrapper[4813]: I1125 10:37:17.847107 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mv2q6" Nov 25 10:37:18 crc kubenswrapper[4813]: I1125 10:37:18.042873 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-trh25" Nov 25 10:37:18 crc kubenswrapper[4813]: I1125 10:37:18.043328 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-trh25" Nov 25 10:37:18 crc kubenswrapper[4813]: I1125 10:37:18.235813 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-km2jk" Nov 25 10:37:18 crc kubenswrapper[4813]: I1125 10:37:18.236036 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-km2jk" Nov 25 10:37:18 crc kubenswrapper[4813]: I1125 10:37:18.676450 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-rhgxx" podUID="a5deac33-30de-491e-94ff-53fe67de0eb8" containerName="registry-server" probeResult="failure" output=< Nov 25 10:37:18 crc kubenswrapper[4813]: timeout: failed to connect service ":50051" within 1s Nov 25 10:37:18 crc kubenswrapper[4813]: > Nov 25 10:37:18 crc kubenswrapper[4813]: I1125 10:37:18.900985 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-mv2q6" podUID="4c9a79a8-32f8-4018-b6e7-a76164389632" containerName="registry-server" probeResult="failure" output=< Nov 25 10:37:18 crc kubenswrapper[4813]: timeout: failed to connect service ":50051" within 1s Nov 25 10:37:18 crc kubenswrapper[4813]: > Nov 25 10:37:19 crc kubenswrapper[4813]: I1125 10:37:19.089888 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-trh25" podUID="9502b5b8-d91d-43fa-9498-4daeb00dd6ba" containerName="registry-server" probeResult="failure" output=< Nov 25 10:37:19 crc kubenswrapper[4813]: timeout: failed to connect service ":50051" within 1s Nov 25 10:37:19 crc kubenswrapper[4813]: > Nov 25 10:37:19 crc kubenswrapper[4813]: I1125 10:37:19.269458 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-km2jk" podUID="e31e8556-76b9-48db-a630-4a990e4a432a" containerName="registry-server" probeResult="failure" output=< Nov 25 10:37:19 crc kubenswrapper[4813]: timeout: failed to connect service ":50051" within 1s Nov 25 10:37:19 crc kubenswrapper[4813]: > Nov 25 10:37:26 crc kubenswrapper[4813]: I1125 10:37:26.374960 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-n6d5q"] Nov 25 10:37:27 crc kubenswrapper[4813]: I1125 10:37:27.686549 4813 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/certified-operators-rhgxx" Nov 25 10:37:27 crc kubenswrapper[4813]: I1125 10:37:27.740959 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rhgxx" Nov 25 10:37:27 crc kubenswrapper[4813]: I1125 10:37:27.887623 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mv2q6" Nov 25 10:37:27 crc kubenswrapper[4813]: I1125 10:37:27.929569 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mv2q6" Nov 25 10:37:28 crc kubenswrapper[4813]: I1125 10:37:28.083824 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-trh25" Nov 25 10:37:28 crc kubenswrapper[4813]: I1125 10:37:28.125634 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-trh25" Nov 25 10:37:28 crc kubenswrapper[4813]: I1125 10:37:28.281952 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-km2jk" Nov 25 10:37:28 crc kubenswrapper[4813]: I1125 10:37:28.325792 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-km2jk" Nov 25 10:37:29 crc kubenswrapper[4813]: I1125 10:37:29.718635 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-km2jk"] Nov 25 10:37:30 crc kubenswrapper[4813]: I1125 10:37:30.218439 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-km2jk" podUID="e31e8556-76b9-48db-a630-4a990e4a432a" containerName="registry-server" containerID="cri-o://56c704f0c96fcd542200feb246ca49a2055f1e0cd71b22418070d2a522bacfff" gracePeriod=2 Nov 25 10:37:30 crc kubenswrapper[4813]: I1125 10:37:30.315300 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-trh25"] Nov 25 10:37:30 crc kubenswrapper[4813]: I1125 10:37:30.315559 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-trh25" podUID="9502b5b8-d91d-43fa-9498-4daeb00dd6ba" containerName="registry-server" containerID="cri-o://77ab10cf72d7430c4fb9a45e5a775991c45f38de1f2c18d4a5bba8583b220246" gracePeriod=2 Nov 25 10:37:30 crc kubenswrapper[4813]: I1125 10:37:30.535569 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-km2jk" Nov 25 10:37:30 crc kubenswrapper[4813]: I1125 10:37:30.628225 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e31e8556-76b9-48db-a630-4a990e4a432a-catalog-content\") pod \"e31e8556-76b9-48db-a630-4a990e4a432a\" (UID: \"e31e8556-76b9-48db-a630-4a990e4a432a\") " Nov 25 10:37:30 crc kubenswrapper[4813]: I1125 10:37:30.628369 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e31e8556-76b9-48db-a630-4a990e4a432a-utilities\") pod \"e31e8556-76b9-48db-a630-4a990e4a432a\" (UID: \"e31e8556-76b9-48db-a630-4a990e4a432a\") " Nov 25 10:37:30 crc kubenswrapper[4813]: I1125 10:37:30.628431 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6clhd\" (UniqueName: \"kubernetes.io/projected/e31e8556-76b9-48db-a630-4a990e4a432a-kube-api-access-6clhd\") pod \"e31e8556-76b9-48db-a630-4a990e4a432a\" (UID: \"e31e8556-76b9-48db-a630-4a990e4a432a\") " Nov 25 10:37:30 crc kubenswrapper[4813]: I1125 10:37:30.629271 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e31e8556-76b9-48db-a630-4a990e4a432a-utilities" (OuterVolumeSpecName: "utilities") pod "e31e8556-76b9-48db-a630-4a990e4a432a" (UID: "e31e8556-76b9-48db-a630-4a990e4a432a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 10:37:30 crc kubenswrapper[4813]: I1125 10:37:30.638312 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e31e8556-76b9-48db-a630-4a990e4a432a-kube-api-access-6clhd" (OuterVolumeSpecName: "kube-api-access-6clhd") pod "e31e8556-76b9-48db-a630-4a990e4a432a" (UID: "e31e8556-76b9-48db-a630-4a990e4a432a"). InnerVolumeSpecName "kube-api-access-6clhd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:37:30 crc kubenswrapper[4813]: I1125 10:37:30.658811 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-trh25" Nov 25 10:37:30 crc kubenswrapper[4813]: I1125 10:37:30.705156 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e31e8556-76b9-48db-a630-4a990e4a432a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e31e8556-76b9-48db-a630-4a990e4a432a" (UID: "e31e8556-76b9-48db-a630-4a990e4a432a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 10:37:30 crc kubenswrapper[4813]: I1125 10:37:30.730474 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9502b5b8-d91d-43fa-9498-4daeb00dd6ba-utilities\") pod \"9502b5b8-d91d-43fa-9498-4daeb00dd6ba\" (UID: \"9502b5b8-d91d-43fa-9498-4daeb00dd6ba\") " Nov 25 10:37:30 crc kubenswrapper[4813]: I1125 10:37:30.732810 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9502b5b8-d91d-43fa-9498-4daeb00dd6ba-utilities" (OuterVolumeSpecName: "utilities") pod "9502b5b8-d91d-43fa-9498-4daeb00dd6ba" (UID: "9502b5b8-d91d-43fa-9498-4daeb00dd6ba"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 10:37:30 crc kubenswrapper[4813]: I1125 10:37:30.732989 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6xgg\" (UniqueName: \"kubernetes.io/projected/9502b5b8-d91d-43fa-9498-4daeb00dd6ba-kube-api-access-g6xgg\") pod \"9502b5b8-d91d-43fa-9498-4daeb00dd6ba\" (UID: \"9502b5b8-d91d-43fa-9498-4daeb00dd6ba\") " Nov 25 10:37:30 crc kubenswrapper[4813]: I1125 10:37:30.733585 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9502b5b8-d91d-43fa-9498-4daeb00dd6ba-catalog-content\") pod \"9502b5b8-d91d-43fa-9498-4daeb00dd6ba\" (UID: \"9502b5b8-d91d-43fa-9498-4daeb00dd6ba\") " Nov 25 10:37:30 crc kubenswrapper[4813]: I1125 10:37:30.734151 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9502b5b8-d91d-43fa-9498-4daeb00dd6ba-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 10:37:30 crc kubenswrapper[4813]: I1125 10:37:30.734178 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e31e8556-76b9-48db-a630-4a990e4a432a-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 10:37:30 crc kubenswrapper[4813]: I1125 10:37:30.734194 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6clhd\" (UniqueName: \"kubernetes.io/projected/e31e8556-76b9-48db-a630-4a990e4a432a-kube-api-access-6clhd\") on node \"crc\" DevicePath \"\"" Nov 25 10:37:30 crc kubenswrapper[4813]: I1125 10:37:30.734207 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e31e8556-76b9-48db-a630-4a990e4a432a-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 10:37:30 crc kubenswrapper[4813]: I1125 10:37:30.736001 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9502b5b8-d91d-43fa-9498-4daeb00dd6ba-kube-api-access-g6xgg" (OuterVolumeSpecName: "kube-api-access-g6xgg") pod "9502b5b8-d91d-43fa-9498-4daeb00dd6ba" (UID: "9502b5b8-d91d-43fa-9498-4daeb00dd6ba"). InnerVolumeSpecName "kube-api-access-g6xgg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:37:30 crc kubenswrapper[4813]: I1125 10:37:30.785782 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9502b5b8-d91d-43fa-9498-4daeb00dd6ba-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9502b5b8-d91d-43fa-9498-4daeb00dd6ba" (UID: "9502b5b8-d91d-43fa-9498-4daeb00dd6ba"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 10:37:30 crc kubenswrapper[4813]: I1125 10:37:30.835238 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9502b5b8-d91d-43fa-9498-4daeb00dd6ba-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 10:37:30 crc kubenswrapper[4813]: I1125 10:37:30.835272 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6xgg\" (UniqueName: \"kubernetes.io/projected/9502b5b8-d91d-43fa-9498-4daeb00dd6ba-kube-api-access-g6xgg\") on node \"crc\" DevicePath \"\"" Nov 25 10:37:31 crc kubenswrapper[4813]: I1125 10:37:31.226071 4813 generic.go:334] "Generic (PLEG): container finished" podID="e31e8556-76b9-48db-a630-4a990e4a432a" containerID="56c704f0c96fcd542200feb246ca49a2055f1e0cd71b22418070d2a522bacfff" exitCode=0 Nov 25 10:37:31 crc kubenswrapper[4813]: I1125 10:37:31.226164 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-km2jk" event={"ID":"e31e8556-76b9-48db-a630-4a990e4a432a","Type":"ContainerDied","Data":"56c704f0c96fcd542200feb246ca49a2055f1e0cd71b22418070d2a522bacfff"} Nov 25 10:37:31 crc kubenswrapper[4813]: I1125 10:37:31.226659 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-km2jk" event={"ID":"e31e8556-76b9-48db-a630-4a990e4a432a","Type":"ContainerDied","Data":"bf0ab9ecf99d7b0ba421c9af475cced24e556caa96abca886f789bdd0dc9aa4c"} Nov 25 10:37:31 crc kubenswrapper[4813]: I1125 10:37:31.226709 4813 scope.go:117] "RemoveContainer" containerID="56c704f0c96fcd542200feb246ca49a2055f1e0cd71b22418070d2a522bacfff" Nov 25 10:37:31 crc kubenswrapper[4813]: I1125 10:37:31.226171 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-km2jk" Nov 25 10:37:31 crc kubenswrapper[4813]: I1125 10:37:31.228642 4813 generic.go:334] "Generic (PLEG): container finished" podID="9502b5b8-d91d-43fa-9498-4daeb00dd6ba" containerID="77ab10cf72d7430c4fb9a45e5a775991c45f38de1f2c18d4a5bba8583b220246" exitCode=0 Nov 25 10:37:31 crc kubenswrapper[4813]: I1125 10:37:31.228662 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-trh25" event={"ID":"9502b5b8-d91d-43fa-9498-4daeb00dd6ba","Type":"ContainerDied","Data":"77ab10cf72d7430c4fb9a45e5a775991c45f38de1f2c18d4a5bba8583b220246"} Nov 25 10:37:31 crc kubenswrapper[4813]: I1125 10:37:31.228753 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-trh25" Nov 25 10:37:31 crc kubenswrapper[4813]: I1125 10:37:31.238141 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-trh25" event={"ID":"9502b5b8-d91d-43fa-9498-4daeb00dd6ba","Type":"ContainerDied","Data":"d6ef8536a459567745d7557971762469e80782e0f56ec43cb5f73ab9d546ccbd"} Nov 25 10:37:31 crc kubenswrapper[4813]: I1125 10:37:31.243445 4813 scope.go:117] "RemoveContainer" containerID="00d8db0d84ea9432bfc419179d3f456f7b92b6e64c4dff1b2994bd27066bb69c" Nov 25 10:37:31 crc kubenswrapper[4813]: I1125 10:37:31.258535 4813 scope.go:117] "RemoveContainer" containerID="6203c8a440a2eedb41013559e4f5a7cd9d2be0cb4b9b1a324b71f85fb0fdb3f7" Nov 25 10:37:31 crc kubenswrapper[4813]: I1125 10:37:31.264593 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-km2jk"] Nov 25 10:37:31 crc kubenswrapper[4813]: I1125 10:37:31.277109 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-km2jk"] Nov 25 10:37:31 crc kubenswrapper[4813]: I1125 10:37:31.278946 4813 scope.go:117] "RemoveContainer" containerID="56c704f0c96fcd542200feb246ca49a2055f1e0cd71b22418070d2a522bacfff" Nov 25 10:37:31 crc kubenswrapper[4813]: E1125 10:37:31.279307 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56c704f0c96fcd542200feb246ca49a2055f1e0cd71b22418070d2a522bacfff\": container with ID starting with 56c704f0c96fcd542200feb246ca49a2055f1e0cd71b22418070d2a522bacfff not found: ID does not exist" containerID="56c704f0c96fcd542200feb246ca49a2055f1e0cd71b22418070d2a522bacfff" Nov 25 10:37:31 crc kubenswrapper[4813]: I1125 10:37:31.279341 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56c704f0c96fcd542200feb246ca49a2055f1e0cd71b22418070d2a522bacfff"} err="failed to get container status \"56c704f0c96fcd542200feb246ca49a2055f1e0cd71b22418070d2a522bacfff\": rpc error: code = NotFound desc = could not find container \"56c704f0c96fcd542200feb246ca49a2055f1e0cd71b22418070d2a522bacfff\": container with ID starting with 56c704f0c96fcd542200feb246ca49a2055f1e0cd71b22418070d2a522bacfff not found: ID does not exist" Nov 25 10:37:31 crc kubenswrapper[4813]: I1125 10:37:31.279363 4813 scope.go:117] "RemoveContainer" containerID="00d8db0d84ea9432bfc419179d3f456f7b92b6e64c4dff1b2994bd27066bb69c" Nov 25 10:37:31 crc kubenswrapper[4813]: E1125 10:37:31.279628 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00d8db0d84ea9432bfc419179d3f456f7b92b6e64c4dff1b2994bd27066bb69c\": container with ID starting with 00d8db0d84ea9432bfc419179d3f456f7b92b6e64c4dff1b2994bd27066bb69c not found: ID does not exist" containerID="00d8db0d84ea9432bfc419179d3f456f7b92b6e64c4dff1b2994bd27066bb69c" Nov 25 10:37:31 crc kubenswrapper[4813]: I1125 10:37:31.279651 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00d8db0d84ea9432bfc419179d3f456f7b92b6e64c4dff1b2994bd27066bb69c"} err="failed to get container status \"00d8db0d84ea9432bfc419179d3f456f7b92b6e64c4dff1b2994bd27066bb69c\": rpc error: code = NotFound desc = could not find container \"00d8db0d84ea9432bfc419179d3f456f7b92b6e64c4dff1b2994bd27066bb69c\": container with ID starting with 00d8db0d84ea9432bfc419179d3f456f7b92b6e64c4dff1b2994bd27066bb69c not 
found: ID does not exist" Nov 25 10:37:31 crc kubenswrapper[4813]: I1125 10:37:31.279664 4813 scope.go:117] "RemoveContainer" containerID="6203c8a440a2eedb41013559e4f5a7cd9d2be0cb4b9b1a324b71f85fb0fdb3f7" Nov 25 10:37:31 crc kubenswrapper[4813]: E1125 10:37:31.279988 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6203c8a440a2eedb41013559e4f5a7cd9d2be0cb4b9b1a324b71f85fb0fdb3f7\": container with ID starting with 6203c8a440a2eedb41013559e4f5a7cd9d2be0cb4b9b1a324b71f85fb0fdb3f7 not found: ID does not exist" containerID="6203c8a440a2eedb41013559e4f5a7cd9d2be0cb4b9b1a324b71f85fb0fdb3f7" Nov 25 10:37:31 crc kubenswrapper[4813]: I1125 10:37:31.280013 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6203c8a440a2eedb41013559e4f5a7cd9d2be0cb4b9b1a324b71f85fb0fdb3f7"} err="failed to get container status \"6203c8a440a2eedb41013559e4f5a7cd9d2be0cb4b9b1a324b71f85fb0fdb3f7\": rpc error: code = NotFound desc = could not find container \"6203c8a440a2eedb41013559e4f5a7cd9d2be0cb4b9b1a324b71f85fb0fdb3f7\": container with ID starting with 6203c8a440a2eedb41013559e4f5a7cd9d2be0cb4b9b1a324b71f85fb0fdb3f7 not found: ID does not exist" Nov 25 10:37:31 crc kubenswrapper[4813]: I1125 10:37:31.280030 4813 scope.go:117] "RemoveContainer" containerID="77ab10cf72d7430c4fb9a45e5a775991c45f38de1f2c18d4a5bba8583b220246" Nov 25 10:37:31 crc kubenswrapper[4813]: I1125 10:37:31.281712 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-trh25"] Nov 25 10:37:31 crc kubenswrapper[4813]: I1125 10:37:31.285517 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-trh25"] Nov 25 10:37:31 crc kubenswrapper[4813]: I1125 10:37:31.293669 4813 scope.go:117] "RemoveContainer" containerID="c53ea92f8fb186235636b1ec4cef531b9d6ba4d5e581c6a26bffb7fb6be2084b" Nov 25 10:37:31 crc kubenswrapper[4813]: I1125 10:37:31.310033 4813 scope.go:117] "RemoveContainer" containerID="639870e2a2318789b1dea93a5bb8c37f47db6126bb4b8fbea876bf49f560d17a" Nov 25 10:37:31 crc kubenswrapper[4813]: I1125 10:37:31.324983 4813 scope.go:117] "RemoveContainer" containerID="77ab10cf72d7430c4fb9a45e5a775991c45f38de1f2c18d4a5bba8583b220246" Nov 25 10:37:31 crc kubenswrapper[4813]: E1125 10:37:31.325736 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77ab10cf72d7430c4fb9a45e5a775991c45f38de1f2c18d4a5bba8583b220246\": container with ID starting with 77ab10cf72d7430c4fb9a45e5a775991c45f38de1f2c18d4a5bba8583b220246 not found: ID does not exist" containerID="77ab10cf72d7430c4fb9a45e5a775991c45f38de1f2c18d4a5bba8583b220246" Nov 25 10:37:31 crc kubenswrapper[4813]: I1125 10:37:31.325783 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77ab10cf72d7430c4fb9a45e5a775991c45f38de1f2c18d4a5bba8583b220246"} err="failed to get container status \"77ab10cf72d7430c4fb9a45e5a775991c45f38de1f2c18d4a5bba8583b220246\": rpc error: code = NotFound desc = could not find container \"77ab10cf72d7430c4fb9a45e5a775991c45f38de1f2c18d4a5bba8583b220246\": container with ID starting with 77ab10cf72d7430c4fb9a45e5a775991c45f38de1f2c18d4a5bba8583b220246 not found: ID does not exist" Nov 25 10:37:31 crc kubenswrapper[4813]: I1125 10:37:31.325815 4813 scope.go:117] "RemoveContainer" 
containerID="c53ea92f8fb186235636b1ec4cef531b9d6ba4d5e581c6a26bffb7fb6be2084b" Nov 25 10:37:31 crc kubenswrapper[4813]: E1125 10:37:31.326339 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c53ea92f8fb186235636b1ec4cef531b9d6ba4d5e581c6a26bffb7fb6be2084b\": container with ID starting with c53ea92f8fb186235636b1ec4cef531b9d6ba4d5e581c6a26bffb7fb6be2084b not found: ID does not exist" containerID="c53ea92f8fb186235636b1ec4cef531b9d6ba4d5e581c6a26bffb7fb6be2084b" Nov 25 10:37:31 crc kubenswrapper[4813]: I1125 10:37:31.326379 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c53ea92f8fb186235636b1ec4cef531b9d6ba4d5e581c6a26bffb7fb6be2084b"} err="failed to get container status \"c53ea92f8fb186235636b1ec4cef531b9d6ba4d5e581c6a26bffb7fb6be2084b\": rpc error: code = NotFound desc = could not find container \"c53ea92f8fb186235636b1ec4cef531b9d6ba4d5e581c6a26bffb7fb6be2084b\": container with ID starting with c53ea92f8fb186235636b1ec4cef531b9d6ba4d5e581c6a26bffb7fb6be2084b not found: ID does not exist" Nov 25 10:37:31 crc kubenswrapper[4813]: I1125 10:37:31.326409 4813 scope.go:117] "RemoveContainer" containerID="639870e2a2318789b1dea93a5bb8c37f47db6126bb4b8fbea876bf49f560d17a" Nov 25 10:37:31 crc kubenswrapper[4813]: E1125 10:37:31.326657 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"639870e2a2318789b1dea93a5bb8c37f47db6126bb4b8fbea876bf49f560d17a\": container with ID starting with 639870e2a2318789b1dea93a5bb8c37f47db6126bb4b8fbea876bf49f560d17a not found: ID does not exist" containerID="639870e2a2318789b1dea93a5bb8c37f47db6126bb4b8fbea876bf49f560d17a" Nov 25 10:37:31 crc kubenswrapper[4813]: I1125 10:37:31.326713 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"639870e2a2318789b1dea93a5bb8c37f47db6126bb4b8fbea876bf49f560d17a"} err="failed to get container status \"639870e2a2318789b1dea93a5bb8c37f47db6126bb4b8fbea876bf49f560d17a\": rpc error: code = NotFound desc = could not find container \"639870e2a2318789b1dea93a5bb8c37f47db6126bb4b8fbea876bf49f560d17a\": container with ID starting with 639870e2a2318789b1dea93a5bb8c37f47db6126bb4b8fbea876bf49f560d17a not found: ID does not exist" Nov 25 10:37:31 crc kubenswrapper[4813]: I1125 10:37:31.628189 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9502b5b8-d91d-43fa-9498-4daeb00dd6ba" path="/var/lib/kubelet/pods/9502b5b8-d91d-43fa-9498-4daeb00dd6ba/volumes" Nov 25 10:37:31 crc kubenswrapper[4813]: I1125 10:37:31.628840 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e31e8556-76b9-48db-a630-4a990e4a432a" path="/var/lib/kubelet/pods/e31e8556-76b9-48db-a630-4a990e4a432a/volumes" Nov 25 10:37:51 crc kubenswrapper[4813]: I1125 10:37:51.414501 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-n6d5q" podUID="f94406f9-8434-44b5-b86c-15a9d11c4245" containerName="oauth-openshift" containerID="cri-o://acc465b1f8d2b03f082d9ebfc7bb5a1a96b2a38b4f83592491f0348f2ae84228" gracePeriod=15 Nov 25 10:37:51 crc kubenswrapper[4813]: I1125 10:37:51.777875 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-n6d5q" Nov 25 10:37:51 crc kubenswrapper[4813]: I1125 10:37:51.815291 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-5b7945bc75-gk2sw"] Nov 25 10:37:51 crc kubenswrapper[4813]: E1125 10:37:51.815568 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9502b5b8-d91d-43fa-9498-4daeb00dd6ba" containerName="registry-server" Nov 25 10:37:51 crc kubenswrapper[4813]: I1125 10:37:51.815589 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="9502b5b8-d91d-43fa-9498-4daeb00dd6ba" containerName="registry-server" Nov 25 10:37:51 crc kubenswrapper[4813]: E1125 10:37:51.815605 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9502b5b8-d91d-43fa-9498-4daeb00dd6ba" containerName="extract-utilities" Nov 25 10:37:51 crc kubenswrapper[4813]: I1125 10:37:51.815614 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="9502b5b8-d91d-43fa-9498-4daeb00dd6ba" containerName="extract-utilities" Nov 25 10:37:51 crc kubenswrapper[4813]: E1125 10:37:51.815624 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02d07673-818e-4645-9360-fa2300714f4c" containerName="extract-content" Nov 25 10:37:51 crc kubenswrapper[4813]: I1125 10:37:51.815631 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="02d07673-818e-4645-9360-fa2300714f4c" containerName="extract-content" Nov 25 10:37:51 crc kubenswrapper[4813]: E1125 10:37:51.815645 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e31e8556-76b9-48db-a630-4a990e4a432a" containerName="extract-utilities" Nov 25 10:37:51 crc kubenswrapper[4813]: I1125 10:37:51.815653 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="e31e8556-76b9-48db-a630-4a990e4a432a" containerName="extract-utilities" Nov 25 10:37:51 crc kubenswrapper[4813]: E1125 10:37:51.815663 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e31e8556-76b9-48db-a630-4a990e4a432a" containerName="extract-content" Nov 25 10:37:51 crc kubenswrapper[4813]: I1125 10:37:51.815671 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="e31e8556-76b9-48db-a630-4a990e4a432a" containerName="extract-content" Nov 25 10:37:51 crc kubenswrapper[4813]: E1125 10:37:51.815731 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1079a291-b9f2-4f78-b720-46f0893f5b88" containerName="registry-server" Nov 25 10:37:51 crc kubenswrapper[4813]: I1125 10:37:51.815740 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="1079a291-b9f2-4f78-b720-46f0893f5b88" containerName="registry-server" Nov 25 10:37:51 crc kubenswrapper[4813]: E1125 10:37:51.815752 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e31e8556-76b9-48db-a630-4a990e4a432a" containerName="registry-server" Nov 25 10:37:51 crc kubenswrapper[4813]: I1125 10:37:51.815760 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="e31e8556-76b9-48db-a630-4a990e4a432a" containerName="registry-server" Nov 25 10:37:51 crc kubenswrapper[4813]: E1125 10:37:51.815772 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9502b5b8-d91d-43fa-9498-4daeb00dd6ba" containerName="extract-content" Nov 25 10:37:51 crc kubenswrapper[4813]: I1125 10:37:51.815780 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="9502b5b8-d91d-43fa-9498-4daeb00dd6ba" containerName="extract-content" Nov 25 10:37:51 crc kubenswrapper[4813]: E1125 10:37:51.815790 4813 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="02d07673-818e-4645-9360-fa2300714f4c" containerName="registry-server" Nov 25 10:37:51 crc kubenswrapper[4813]: I1125 10:37:51.815800 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="02d07673-818e-4645-9360-fa2300714f4c" containerName="registry-server" Nov 25 10:37:51 crc kubenswrapper[4813]: E1125 10:37:51.815810 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02d07673-818e-4645-9360-fa2300714f4c" containerName="extract-utilities" Nov 25 10:37:51 crc kubenswrapper[4813]: I1125 10:37:51.815818 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="02d07673-818e-4645-9360-fa2300714f4c" containerName="extract-utilities" Nov 25 10:37:51 crc kubenswrapper[4813]: E1125 10:37:51.815832 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f94406f9-8434-44b5-b86c-15a9d11c4245" containerName="oauth-openshift" Nov 25 10:37:51 crc kubenswrapper[4813]: I1125 10:37:51.815840 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f94406f9-8434-44b5-b86c-15a9d11c4245" containerName="oauth-openshift" Nov 25 10:37:51 crc kubenswrapper[4813]: E1125 10:37:51.815852 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ff51e6c-cd70-4242-8e66-0bcc9b3e7388" containerName="pruner" Nov 25 10:37:51 crc kubenswrapper[4813]: I1125 10:37:51.815860 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ff51e6c-cd70-4242-8e66-0bcc9b3e7388" containerName="pruner" Nov 25 10:37:51 crc kubenswrapper[4813]: E1125 10:37:51.815869 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1079a291-b9f2-4f78-b720-46f0893f5b88" containerName="extract-content" Nov 25 10:37:51 crc kubenswrapper[4813]: I1125 10:37:51.815877 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="1079a291-b9f2-4f78-b720-46f0893f5b88" containerName="extract-content" Nov 25 10:37:51 crc kubenswrapper[4813]: E1125 10:37:51.815887 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1079a291-b9f2-4f78-b720-46f0893f5b88" containerName="extract-utilities" Nov 25 10:37:51 crc kubenswrapper[4813]: I1125 10:37:51.815895 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="1079a291-b9f2-4f78-b720-46f0893f5b88" containerName="extract-utilities" Nov 25 10:37:51 crc kubenswrapper[4813]: I1125 10:37:51.816125 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="e31e8556-76b9-48db-a630-4a990e4a432a" containerName="registry-server" Nov 25 10:37:51 crc kubenswrapper[4813]: I1125 10:37:51.816144 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ff51e6c-cd70-4242-8e66-0bcc9b3e7388" containerName="pruner" Nov 25 10:37:51 crc kubenswrapper[4813]: I1125 10:37:51.816155 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="1079a291-b9f2-4f78-b720-46f0893f5b88" containerName="registry-server" Nov 25 10:37:51 crc kubenswrapper[4813]: I1125 10:37:51.816169 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f94406f9-8434-44b5-b86c-15a9d11c4245" containerName="oauth-openshift" Nov 25 10:37:51 crc kubenswrapper[4813]: I1125 10:37:51.816180 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="02d07673-818e-4645-9360-fa2300714f4c" containerName="registry-server" Nov 25 10:37:51 crc kubenswrapper[4813]: I1125 10:37:51.816191 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="9502b5b8-d91d-43fa-9498-4daeb00dd6ba" containerName="registry-server" Nov 25 10:37:51 crc kubenswrapper[4813]: I1125 
10:37:51.816654 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5b7945bc75-gk2sw" Nov 25 10:37:51 crc kubenswrapper[4813]: I1125 10:37:51.827030 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5b7945bc75-gk2sw"] Nov 25 10:37:51 crc kubenswrapper[4813]: I1125 10:37:51.913743 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2t5q\" (UniqueName: \"kubernetes.io/projected/f94406f9-8434-44b5-b86c-15a9d11c4245-kube-api-access-s2t5q\") pod \"f94406f9-8434-44b5-b86c-15a9d11c4245\" (UID: \"f94406f9-8434-44b5-b86c-15a9d11c4245\") " Nov 25 10:37:51 crc kubenswrapper[4813]: I1125 10:37:51.914053 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f94406f9-8434-44b5-b86c-15a9d11c4245-v4-0-config-system-serving-cert\") pod \"f94406f9-8434-44b5-b86c-15a9d11c4245\" (UID: \"f94406f9-8434-44b5-b86c-15a9d11c4245\") " Nov 25 10:37:51 crc kubenswrapper[4813]: I1125 10:37:51.914144 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f94406f9-8434-44b5-b86c-15a9d11c4245-v4-0-config-system-ocp-branding-template\") pod \"f94406f9-8434-44b5-b86c-15a9d11c4245\" (UID: \"f94406f9-8434-44b5-b86c-15a9d11c4245\") " Nov 25 10:37:51 crc kubenswrapper[4813]: I1125 10:37:51.914237 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f94406f9-8434-44b5-b86c-15a9d11c4245-v4-0-config-system-session\") pod \"f94406f9-8434-44b5-b86c-15a9d11c4245\" (UID: \"f94406f9-8434-44b5-b86c-15a9d11c4245\") " Nov 25 10:37:51 crc kubenswrapper[4813]: I1125 10:37:51.914335 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f94406f9-8434-44b5-b86c-15a9d11c4245-v4-0-config-user-template-login\") pod \"f94406f9-8434-44b5-b86c-15a9d11c4245\" (UID: \"f94406f9-8434-44b5-b86c-15a9d11c4245\") " Nov 25 10:37:51 crc kubenswrapper[4813]: I1125 10:37:51.914434 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f94406f9-8434-44b5-b86c-15a9d11c4245-audit-dir\") pod \"f94406f9-8434-44b5-b86c-15a9d11c4245\" (UID: \"f94406f9-8434-44b5-b86c-15a9d11c4245\") " Nov 25 10:37:51 crc kubenswrapper[4813]: I1125 10:37:51.914541 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f94406f9-8434-44b5-b86c-15a9d11c4245-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f94406f9-8434-44b5-b86c-15a9d11c4245" (UID: "f94406f9-8434-44b5-b86c-15a9d11c4245"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 10:37:51 crc kubenswrapper[4813]: I1125 10:37:51.914638 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f94406f9-8434-44b5-b86c-15a9d11c4245-v4-0-config-user-template-error\") pod \"f94406f9-8434-44b5-b86c-15a9d11c4245\" (UID: \"f94406f9-8434-44b5-b86c-15a9d11c4245\") " Nov 25 10:37:51 crc kubenswrapper[4813]: I1125 10:37:51.914760 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f94406f9-8434-44b5-b86c-15a9d11c4245-v4-0-config-system-service-ca\") pod \"f94406f9-8434-44b5-b86c-15a9d11c4245\" (UID: \"f94406f9-8434-44b5-b86c-15a9d11c4245\") " Nov 25 10:37:51 crc kubenswrapper[4813]: I1125 10:37:51.914800 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f94406f9-8434-44b5-b86c-15a9d11c4245-audit-policies\") pod \"f94406f9-8434-44b5-b86c-15a9d11c4245\" (UID: \"f94406f9-8434-44b5-b86c-15a9d11c4245\") " Nov 25 10:37:51 crc kubenswrapper[4813]: I1125 10:37:51.914824 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f94406f9-8434-44b5-b86c-15a9d11c4245-v4-0-config-system-router-certs\") pod \"f94406f9-8434-44b5-b86c-15a9d11c4245\" (UID: \"f94406f9-8434-44b5-b86c-15a9d11c4245\") " Nov 25 10:37:51 crc kubenswrapper[4813]: I1125 10:37:51.914852 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f94406f9-8434-44b5-b86c-15a9d11c4245-v4-0-config-user-template-provider-selection\") pod \"f94406f9-8434-44b5-b86c-15a9d11c4245\" (UID: \"f94406f9-8434-44b5-b86c-15a9d11c4245\") " Nov 25 10:37:51 crc kubenswrapper[4813]: I1125 10:37:51.914934 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f94406f9-8434-44b5-b86c-15a9d11c4245-v4-0-config-system-cliconfig\") pod \"f94406f9-8434-44b5-b86c-15a9d11c4245\" (UID: \"f94406f9-8434-44b5-b86c-15a9d11c4245\") " Nov 25 10:37:51 crc kubenswrapper[4813]: I1125 10:37:51.914957 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f94406f9-8434-44b5-b86c-15a9d11c4245-v4-0-config-user-idp-0-file-data\") pod \"f94406f9-8434-44b5-b86c-15a9d11c4245\" (UID: \"f94406f9-8434-44b5-b86c-15a9d11c4245\") " Nov 25 10:37:51 crc kubenswrapper[4813]: I1125 10:37:51.914985 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f94406f9-8434-44b5-b86c-15a9d11c4245-v4-0-config-system-trusted-ca-bundle\") pod \"f94406f9-8434-44b5-b86c-15a9d11c4245\" (UID: \"f94406f9-8434-44b5-b86c-15a9d11c4245\") " Nov 25 10:37:51 crc kubenswrapper[4813]: I1125 10:37:51.915075 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1badc1ca-27d9-47ce-ad68-5e11492481e1-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5b7945bc75-gk2sw\" (UID: \"1badc1ca-27d9-47ce-ad68-5e11492481e1\") " 
pod="openshift-authentication/oauth-openshift-5b7945bc75-gk2sw" Nov 25 10:37:51 crc kubenswrapper[4813]: I1125 10:37:51.915107 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1badc1ca-27d9-47ce-ad68-5e11492481e1-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5b7945bc75-gk2sw\" (UID: \"1badc1ca-27d9-47ce-ad68-5e11492481e1\") " pod="openshift-authentication/oauth-openshift-5b7945bc75-gk2sw" Nov 25 10:37:51 crc kubenswrapper[4813]: I1125 10:37:51.915133 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1badc1ca-27d9-47ce-ad68-5e11492481e1-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5b7945bc75-gk2sw\" (UID: \"1badc1ca-27d9-47ce-ad68-5e11492481e1\") " pod="openshift-authentication/oauth-openshift-5b7945bc75-gk2sw" Nov 25 10:37:51 crc kubenswrapper[4813]: I1125 10:37:51.915156 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1badc1ca-27d9-47ce-ad68-5e11492481e1-audit-dir\") pod \"oauth-openshift-5b7945bc75-gk2sw\" (UID: \"1badc1ca-27d9-47ce-ad68-5e11492481e1\") " pod="openshift-authentication/oauth-openshift-5b7945bc75-gk2sw" Nov 25 10:37:51 crc kubenswrapper[4813]: I1125 10:37:51.915174 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1badc1ca-27d9-47ce-ad68-5e11492481e1-v4-0-config-user-template-login\") pod \"oauth-openshift-5b7945bc75-gk2sw\" (UID: \"1badc1ca-27d9-47ce-ad68-5e11492481e1\") " pod="openshift-authentication/oauth-openshift-5b7945bc75-gk2sw" Nov 25 10:37:51 crc kubenswrapper[4813]: I1125 10:37:51.915199 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1badc1ca-27d9-47ce-ad68-5e11492481e1-v4-0-config-user-template-error\") pod \"oauth-openshift-5b7945bc75-gk2sw\" (UID: \"1badc1ca-27d9-47ce-ad68-5e11492481e1\") " pod="openshift-authentication/oauth-openshift-5b7945bc75-gk2sw" Nov 25 10:37:51 crc kubenswrapper[4813]: I1125 10:37:51.915320 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1badc1ca-27d9-47ce-ad68-5e11492481e1-v4-0-config-system-session\") pod \"oauth-openshift-5b7945bc75-gk2sw\" (UID: \"1badc1ca-27d9-47ce-ad68-5e11492481e1\") " pod="openshift-authentication/oauth-openshift-5b7945bc75-gk2sw" Nov 25 10:37:51 crc kubenswrapper[4813]: I1125 10:37:51.915653 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f94406f9-8434-44b5-b86c-15a9d11c4245-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "f94406f9-8434-44b5-b86c-15a9d11c4245" (UID: "f94406f9-8434-44b5-b86c-15a9d11c4245"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:37:51 crc kubenswrapper[4813]: I1125 10:37:51.915739 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1badc1ca-27d9-47ce-ad68-5e11492481e1-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5b7945bc75-gk2sw\" (UID: \"1badc1ca-27d9-47ce-ad68-5e11492481e1\") " pod="openshift-authentication/oauth-openshift-5b7945bc75-gk2sw" Nov 25 10:37:51 crc kubenswrapper[4813]: I1125 10:37:51.915809 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1badc1ca-27d9-47ce-ad68-5e11492481e1-v4-0-config-system-service-ca\") pod \"oauth-openshift-5b7945bc75-gk2sw\" (UID: \"1badc1ca-27d9-47ce-ad68-5e11492481e1\") " pod="openshift-authentication/oauth-openshift-5b7945bc75-gk2sw" Nov 25 10:37:51 crc kubenswrapper[4813]: I1125 10:37:51.915855 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1badc1ca-27d9-47ce-ad68-5e11492481e1-audit-policies\") pod \"oauth-openshift-5b7945bc75-gk2sw\" (UID: \"1badc1ca-27d9-47ce-ad68-5e11492481e1\") " pod="openshift-authentication/oauth-openshift-5b7945bc75-gk2sw" Nov 25 10:37:51 crc kubenswrapper[4813]: I1125 10:37:51.915924 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1badc1ca-27d9-47ce-ad68-5e11492481e1-v4-0-config-system-router-certs\") pod \"oauth-openshift-5b7945bc75-gk2sw\" (UID: \"1badc1ca-27d9-47ce-ad68-5e11492481e1\") " pod="openshift-authentication/oauth-openshift-5b7945bc75-gk2sw" Nov 25 10:37:51 crc kubenswrapper[4813]: I1125 10:37:51.915950 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmzs9\" (UniqueName: \"kubernetes.io/projected/1badc1ca-27d9-47ce-ad68-5e11492481e1-kube-api-access-dmzs9\") pod \"oauth-openshift-5b7945bc75-gk2sw\" (UID: \"1badc1ca-27d9-47ce-ad68-5e11492481e1\") " pod="openshift-authentication/oauth-openshift-5b7945bc75-gk2sw" Nov 25 10:37:51 crc kubenswrapper[4813]: I1125 10:37:51.915985 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1badc1ca-27d9-47ce-ad68-5e11492481e1-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5b7945bc75-gk2sw\" (UID: \"1badc1ca-27d9-47ce-ad68-5e11492481e1\") " pod="openshift-authentication/oauth-openshift-5b7945bc75-gk2sw" Nov 25 10:37:51 crc kubenswrapper[4813]: I1125 10:37:51.916008 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1badc1ca-27d9-47ce-ad68-5e11492481e1-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5b7945bc75-gk2sw\" (UID: \"1badc1ca-27d9-47ce-ad68-5e11492481e1\") " pod="openshift-authentication/oauth-openshift-5b7945bc75-gk2sw" Nov 25 10:37:51 crc kubenswrapper[4813]: I1125 10:37:51.916056 4813 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f94406f9-8434-44b5-b86c-15a9d11c4245-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 25 
10:37:51 crc kubenswrapper[4813]: I1125 10:37:51.916076 4813 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f94406f9-8434-44b5-b86c-15a9d11c4245-audit-dir\") on node \"crc\" DevicePath \"\"" Nov 25 10:37:51 crc kubenswrapper[4813]: I1125 10:37:51.916438 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f94406f9-8434-44b5-b86c-15a9d11c4245-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "f94406f9-8434-44b5-b86c-15a9d11c4245" (UID: "f94406f9-8434-44b5-b86c-15a9d11c4245"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:37:51 crc kubenswrapper[4813]: I1125 10:37:51.917219 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f94406f9-8434-44b5-b86c-15a9d11c4245-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "f94406f9-8434-44b5-b86c-15a9d11c4245" (UID: "f94406f9-8434-44b5-b86c-15a9d11c4245"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:37:51 crc kubenswrapper[4813]: I1125 10:37:51.917518 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f94406f9-8434-44b5-b86c-15a9d11c4245-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "f94406f9-8434-44b5-b86c-15a9d11c4245" (UID: "f94406f9-8434-44b5-b86c-15a9d11c4245"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:37:51 crc kubenswrapper[4813]: I1125 10:37:51.921081 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f94406f9-8434-44b5-b86c-15a9d11c4245-kube-api-access-s2t5q" (OuterVolumeSpecName: "kube-api-access-s2t5q") pod "f94406f9-8434-44b5-b86c-15a9d11c4245" (UID: "f94406f9-8434-44b5-b86c-15a9d11c4245"). InnerVolumeSpecName "kube-api-access-s2t5q". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:37:51 crc kubenswrapper[4813]: I1125 10:37:51.921096 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f94406f9-8434-44b5-b86c-15a9d11c4245-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "f94406f9-8434-44b5-b86c-15a9d11c4245" (UID: "f94406f9-8434-44b5-b86c-15a9d11c4245"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:37:51 crc kubenswrapper[4813]: I1125 10:37:51.924911 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f94406f9-8434-44b5-b86c-15a9d11c4245-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "f94406f9-8434-44b5-b86c-15a9d11c4245" (UID: "f94406f9-8434-44b5-b86c-15a9d11c4245"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:37:51 crc kubenswrapper[4813]: I1125 10:37:51.925407 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f94406f9-8434-44b5-b86c-15a9d11c4245-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "f94406f9-8434-44b5-b86c-15a9d11c4245" (UID: "f94406f9-8434-44b5-b86c-15a9d11c4245"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:37:51 crc kubenswrapper[4813]: I1125 10:37:51.926690 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f94406f9-8434-44b5-b86c-15a9d11c4245-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "f94406f9-8434-44b5-b86c-15a9d11c4245" (UID: "f94406f9-8434-44b5-b86c-15a9d11c4245"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:37:51 crc kubenswrapper[4813]: I1125 10:37:51.927053 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f94406f9-8434-44b5-b86c-15a9d11c4245-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "f94406f9-8434-44b5-b86c-15a9d11c4245" (UID: "f94406f9-8434-44b5-b86c-15a9d11c4245"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:37:51 crc kubenswrapper[4813]: I1125 10:37:51.927392 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f94406f9-8434-44b5-b86c-15a9d11c4245-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "f94406f9-8434-44b5-b86c-15a9d11c4245" (UID: "f94406f9-8434-44b5-b86c-15a9d11c4245"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:37:51 crc kubenswrapper[4813]: I1125 10:37:51.927580 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f94406f9-8434-44b5-b86c-15a9d11c4245-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "f94406f9-8434-44b5-b86c-15a9d11c4245" (UID: "f94406f9-8434-44b5-b86c-15a9d11c4245"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:37:51 crc kubenswrapper[4813]: I1125 10:37:51.929927 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f94406f9-8434-44b5-b86c-15a9d11c4245-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "f94406f9-8434-44b5-b86c-15a9d11c4245" (UID: "f94406f9-8434-44b5-b86c-15a9d11c4245"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:37:52 crc kubenswrapper[4813]: I1125 10:37:52.017103 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1badc1ca-27d9-47ce-ad68-5e11492481e1-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5b7945bc75-gk2sw\" (UID: \"1badc1ca-27d9-47ce-ad68-5e11492481e1\") " pod="openshift-authentication/oauth-openshift-5b7945bc75-gk2sw" Nov 25 10:37:52 crc kubenswrapper[4813]: I1125 10:37:52.017177 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1badc1ca-27d9-47ce-ad68-5e11492481e1-audit-dir\") pod \"oauth-openshift-5b7945bc75-gk2sw\" (UID: \"1badc1ca-27d9-47ce-ad68-5e11492481e1\") " pod="openshift-authentication/oauth-openshift-5b7945bc75-gk2sw" Nov 25 10:37:52 crc kubenswrapper[4813]: I1125 10:37:52.017197 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1badc1ca-27d9-47ce-ad68-5e11492481e1-v4-0-config-user-template-login\") pod \"oauth-openshift-5b7945bc75-gk2sw\" (UID: \"1badc1ca-27d9-47ce-ad68-5e11492481e1\") " pod="openshift-authentication/oauth-openshift-5b7945bc75-gk2sw" Nov 25 10:37:52 crc kubenswrapper[4813]: I1125 10:37:52.017217 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1badc1ca-27d9-47ce-ad68-5e11492481e1-v4-0-config-user-template-error\") pod \"oauth-openshift-5b7945bc75-gk2sw\" (UID: \"1badc1ca-27d9-47ce-ad68-5e11492481e1\") " pod="openshift-authentication/oauth-openshift-5b7945bc75-gk2sw" Nov 25 10:37:52 crc kubenswrapper[4813]: I1125 10:37:52.017249 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1badc1ca-27d9-47ce-ad68-5e11492481e1-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5b7945bc75-gk2sw\" (UID: \"1badc1ca-27d9-47ce-ad68-5e11492481e1\") " pod="openshift-authentication/oauth-openshift-5b7945bc75-gk2sw" Nov 25 10:37:52 crc kubenswrapper[4813]: I1125 10:37:52.017274 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1badc1ca-27d9-47ce-ad68-5e11492481e1-v4-0-config-system-session\") pod \"oauth-openshift-5b7945bc75-gk2sw\" (UID: \"1badc1ca-27d9-47ce-ad68-5e11492481e1\") " pod="openshift-authentication/oauth-openshift-5b7945bc75-gk2sw" Nov 25 10:37:52 crc kubenswrapper[4813]: I1125 10:37:52.017297 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1badc1ca-27d9-47ce-ad68-5e11492481e1-v4-0-config-system-service-ca\") pod \"oauth-openshift-5b7945bc75-gk2sw\" (UID: \"1badc1ca-27d9-47ce-ad68-5e11492481e1\") " pod="openshift-authentication/oauth-openshift-5b7945bc75-gk2sw" Nov 25 10:37:52 crc kubenswrapper[4813]: I1125 10:37:52.017320 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1badc1ca-27d9-47ce-ad68-5e11492481e1-audit-policies\") pod \"oauth-openshift-5b7945bc75-gk2sw\" (UID: \"1badc1ca-27d9-47ce-ad68-5e11492481e1\") " pod="openshift-authentication/oauth-openshift-5b7945bc75-gk2sw" Nov 25 
10:37:52 crc kubenswrapper[4813]: I1125 10:37:52.017348 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1badc1ca-27d9-47ce-ad68-5e11492481e1-v4-0-config-system-router-certs\") pod \"oauth-openshift-5b7945bc75-gk2sw\" (UID: \"1badc1ca-27d9-47ce-ad68-5e11492481e1\") " pod="openshift-authentication/oauth-openshift-5b7945bc75-gk2sw" Nov 25 10:37:52 crc kubenswrapper[4813]: I1125 10:37:52.017369 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmzs9\" (UniqueName: \"kubernetes.io/projected/1badc1ca-27d9-47ce-ad68-5e11492481e1-kube-api-access-dmzs9\") pod \"oauth-openshift-5b7945bc75-gk2sw\" (UID: \"1badc1ca-27d9-47ce-ad68-5e11492481e1\") " pod="openshift-authentication/oauth-openshift-5b7945bc75-gk2sw" Nov 25 10:37:52 crc kubenswrapper[4813]: I1125 10:37:52.017395 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1badc1ca-27d9-47ce-ad68-5e11492481e1-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5b7945bc75-gk2sw\" (UID: \"1badc1ca-27d9-47ce-ad68-5e11492481e1\") " pod="openshift-authentication/oauth-openshift-5b7945bc75-gk2sw" Nov 25 10:37:52 crc kubenswrapper[4813]: I1125 10:37:52.017418 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1badc1ca-27d9-47ce-ad68-5e11492481e1-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5b7945bc75-gk2sw\" (UID: \"1badc1ca-27d9-47ce-ad68-5e11492481e1\") " pod="openshift-authentication/oauth-openshift-5b7945bc75-gk2sw" Nov 25 10:37:52 crc kubenswrapper[4813]: I1125 10:37:52.017454 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1badc1ca-27d9-47ce-ad68-5e11492481e1-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5b7945bc75-gk2sw\" (UID: \"1badc1ca-27d9-47ce-ad68-5e11492481e1\") " pod="openshift-authentication/oauth-openshift-5b7945bc75-gk2sw" Nov 25 10:37:52 crc kubenswrapper[4813]: I1125 10:37:52.017481 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1badc1ca-27d9-47ce-ad68-5e11492481e1-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5b7945bc75-gk2sw\" (UID: \"1badc1ca-27d9-47ce-ad68-5e11492481e1\") " pod="openshift-authentication/oauth-openshift-5b7945bc75-gk2sw" Nov 25 10:37:52 crc kubenswrapper[4813]: I1125 10:37:52.017525 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2t5q\" (UniqueName: \"kubernetes.io/projected/f94406f9-8434-44b5-b86c-15a9d11c4245-kube-api-access-s2t5q\") on node \"crc\" DevicePath \"\"" Nov 25 10:37:52 crc kubenswrapper[4813]: I1125 10:37:52.017538 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f94406f9-8434-44b5-b86c-15a9d11c4245-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 10:37:52 crc kubenswrapper[4813]: I1125 10:37:52.017550 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f94406f9-8434-44b5-b86c-15a9d11c4245-v4-0-config-system-ocp-branding-template\") on node \"crc\" 
DevicePath \"\"" Nov 25 10:37:52 crc kubenswrapper[4813]: I1125 10:37:52.017563 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f94406f9-8434-44b5-b86c-15a9d11c4245-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 25 10:37:52 crc kubenswrapper[4813]: I1125 10:37:52.017578 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f94406f9-8434-44b5-b86c-15a9d11c4245-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 25 10:37:52 crc kubenswrapper[4813]: I1125 10:37:52.017592 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f94406f9-8434-44b5-b86c-15a9d11c4245-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 25 10:37:52 crc kubenswrapper[4813]: I1125 10:37:52.017604 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f94406f9-8434-44b5-b86c-15a9d11c4245-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 25 10:37:52 crc kubenswrapper[4813]: I1125 10:37:52.017722 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f94406f9-8434-44b5-b86c-15a9d11c4245-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 25 10:37:52 crc kubenswrapper[4813]: I1125 10:37:52.017736 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f94406f9-8434-44b5-b86c-15a9d11c4245-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 25 10:37:52 crc kubenswrapper[4813]: I1125 10:37:52.017750 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f94406f9-8434-44b5-b86c-15a9d11c4245-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 25 10:37:52 crc kubenswrapper[4813]: I1125 10:37:52.017762 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f94406f9-8434-44b5-b86c-15a9d11c4245-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 25 10:37:52 crc kubenswrapper[4813]: I1125 10:37:52.017801 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f94406f9-8434-44b5-b86c-15a9d11c4245-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 10:37:52 crc kubenswrapper[4813]: I1125 10:37:52.018472 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1badc1ca-27d9-47ce-ad68-5e11492481e1-v4-0-config-system-service-ca\") pod \"oauth-openshift-5b7945bc75-gk2sw\" (UID: \"1badc1ca-27d9-47ce-ad68-5e11492481e1\") " pod="openshift-authentication/oauth-openshift-5b7945bc75-gk2sw" Nov 25 10:37:52 crc kubenswrapper[4813]: I1125 10:37:52.018526 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1badc1ca-27d9-47ce-ad68-5e11492481e1-audit-dir\") pod \"oauth-openshift-5b7945bc75-gk2sw\" (UID: \"1badc1ca-27d9-47ce-ad68-5e11492481e1\") " 
pod="openshift-authentication/oauth-openshift-5b7945bc75-gk2sw" Nov 25 10:37:52 crc kubenswrapper[4813]: I1125 10:37:52.018994 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1badc1ca-27d9-47ce-ad68-5e11492481e1-audit-policies\") pod \"oauth-openshift-5b7945bc75-gk2sw\" (UID: \"1badc1ca-27d9-47ce-ad68-5e11492481e1\") " pod="openshift-authentication/oauth-openshift-5b7945bc75-gk2sw" Nov 25 10:37:52 crc kubenswrapper[4813]: I1125 10:37:52.019410 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1badc1ca-27d9-47ce-ad68-5e11492481e1-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5b7945bc75-gk2sw\" (UID: \"1badc1ca-27d9-47ce-ad68-5e11492481e1\") " pod="openshift-authentication/oauth-openshift-5b7945bc75-gk2sw" Nov 25 10:37:52 crc kubenswrapper[4813]: I1125 10:37:52.020748 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1badc1ca-27d9-47ce-ad68-5e11492481e1-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5b7945bc75-gk2sw\" (UID: \"1badc1ca-27d9-47ce-ad68-5e11492481e1\") " pod="openshift-authentication/oauth-openshift-5b7945bc75-gk2sw" Nov 25 10:37:52 crc kubenswrapper[4813]: I1125 10:37:52.020859 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1badc1ca-27d9-47ce-ad68-5e11492481e1-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5b7945bc75-gk2sw\" (UID: \"1badc1ca-27d9-47ce-ad68-5e11492481e1\") " pod="openshift-authentication/oauth-openshift-5b7945bc75-gk2sw" Nov 25 10:37:52 crc kubenswrapper[4813]: I1125 10:37:52.023978 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1badc1ca-27d9-47ce-ad68-5e11492481e1-v4-0-config-user-template-error\") pod \"oauth-openshift-5b7945bc75-gk2sw\" (UID: \"1badc1ca-27d9-47ce-ad68-5e11492481e1\") " pod="openshift-authentication/oauth-openshift-5b7945bc75-gk2sw" Nov 25 10:37:52 crc kubenswrapper[4813]: I1125 10:37:52.023989 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1badc1ca-27d9-47ce-ad68-5e11492481e1-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5b7945bc75-gk2sw\" (UID: \"1badc1ca-27d9-47ce-ad68-5e11492481e1\") " pod="openshift-authentication/oauth-openshift-5b7945bc75-gk2sw" Nov 25 10:37:52 crc kubenswrapper[4813]: I1125 10:37:52.024095 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1badc1ca-27d9-47ce-ad68-5e11492481e1-v4-0-config-system-session\") pod \"oauth-openshift-5b7945bc75-gk2sw\" (UID: \"1badc1ca-27d9-47ce-ad68-5e11492481e1\") " pod="openshift-authentication/oauth-openshift-5b7945bc75-gk2sw" Nov 25 10:37:52 crc kubenswrapper[4813]: I1125 10:37:52.024215 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1badc1ca-27d9-47ce-ad68-5e11492481e1-v4-0-config-system-router-certs\") pod \"oauth-openshift-5b7945bc75-gk2sw\" (UID: \"1badc1ca-27d9-47ce-ad68-5e11492481e1\") " pod="openshift-authentication/oauth-openshift-5b7945bc75-gk2sw" Nov 25 10:37:52 
crc kubenswrapper[4813]: I1125 10:37:52.024319 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1badc1ca-27d9-47ce-ad68-5e11492481e1-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5b7945bc75-gk2sw\" (UID: \"1badc1ca-27d9-47ce-ad68-5e11492481e1\") " pod="openshift-authentication/oauth-openshift-5b7945bc75-gk2sw" Nov 25 10:37:52 crc kubenswrapper[4813]: I1125 10:37:52.024367 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1badc1ca-27d9-47ce-ad68-5e11492481e1-v4-0-config-user-template-login\") pod \"oauth-openshift-5b7945bc75-gk2sw\" (UID: \"1badc1ca-27d9-47ce-ad68-5e11492481e1\") " pod="openshift-authentication/oauth-openshift-5b7945bc75-gk2sw" Nov 25 10:37:52 crc kubenswrapper[4813]: I1125 10:37:52.024663 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1badc1ca-27d9-47ce-ad68-5e11492481e1-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5b7945bc75-gk2sw\" (UID: \"1badc1ca-27d9-47ce-ad68-5e11492481e1\") " pod="openshift-authentication/oauth-openshift-5b7945bc75-gk2sw" Nov 25 10:37:52 crc kubenswrapper[4813]: I1125 10:37:52.033649 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmzs9\" (UniqueName: \"kubernetes.io/projected/1badc1ca-27d9-47ce-ad68-5e11492481e1-kube-api-access-dmzs9\") pod \"oauth-openshift-5b7945bc75-gk2sw\" (UID: \"1badc1ca-27d9-47ce-ad68-5e11492481e1\") " pod="openshift-authentication/oauth-openshift-5b7945bc75-gk2sw" Nov 25 10:37:52 crc kubenswrapper[4813]: I1125 10:37:52.138953 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5b7945bc75-gk2sw" Nov 25 10:37:52 crc kubenswrapper[4813]: I1125 10:37:52.327821 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5b7945bc75-gk2sw"] Nov 25 10:37:52 crc kubenswrapper[4813]: I1125 10:37:52.355335 4813 generic.go:334] "Generic (PLEG): container finished" podID="f94406f9-8434-44b5-b86c-15a9d11c4245" containerID="acc465b1f8d2b03f082d9ebfc7bb5a1a96b2a38b4f83592491f0348f2ae84228" exitCode=0 Nov 25 10:37:52 crc kubenswrapper[4813]: I1125 10:37:52.355421 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-n6d5q" Nov 25 10:37:52 crc kubenswrapper[4813]: I1125 10:37:52.355431 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-n6d5q" event={"ID":"f94406f9-8434-44b5-b86c-15a9d11c4245","Type":"ContainerDied","Data":"acc465b1f8d2b03f082d9ebfc7bb5a1a96b2a38b4f83592491f0348f2ae84228"} Nov 25 10:37:52 crc kubenswrapper[4813]: I1125 10:37:52.355482 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-n6d5q" event={"ID":"f94406f9-8434-44b5-b86c-15a9d11c4245","Type":"ContainerDied","Data":"20361c962a9cbc39ad1589a52553ac97ca787d25ec8423a674499146c8b5b336"} Nov 25 10:37:52 crc kubenswrapper[4813]: I1125 10:37:52.355506 4813 scope.go:117] "RemoveContainer" containerID="acc465b1f8d2b03f082d9ebfc7bb5a1a96b2a38b4f83592491f0348f2ae84228" Nov 25 10:37:52 crc kubenswrapper[4813]: I1125 10:37:52.357115 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5b7945bc75-gk2sw" event={"ID":"1badc1ca-27d9-47ce-ad68-5e11492481e1","Type":"ContainerStarted","Data":"dba03bb724851f778aa3d2bd8a1850b52cd01e763a1461f5de5476ae994cd4da"} Nov 25 10:37:52 crc kubenswrapper[4813]: I1125 10:37:52.388426 4813 scope.go:117] "RemoveContainer" containerID="acc465b1f8d2b03f082d9ebfc7bb5a1a96b2a38b4f83592491f0348f2ae84228" Nov 25 10:37:52 crc kubenswrapper[4813]: E1125 10:37:52.390014 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acc465b1f8d2b03f082d9ebfc7bb5a1a96b2a38b4f83592491f0348f2ae84228\": container with ID starting with acc465b1f8d2b03f082d9ebfc7bb5a1a96b2a38b4f83592491f0348f2ae84228 not found: ID does not exist" containerID="acc465b1f8d2b03f082d9ebfc7bb5a1a96b2a38b4f83592491f0348f2ae84228" Nov 25 10:37:52 crc kubenswrapper[4813]: I1125 10:37:52.390088 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acc465b1f8d2b03f082d9ebfc7bb5a1a96b2a38b4f83592491f0348f2ae84228"} err="failed to get container status \"acc465b1f8d2b03f082d9ebfc7bb5a1a96b2a38b4f83592491f0348f2ae84228\": rpc error: code = NotFound desc = could not find container \"acc465b1f8d2b03f082d9ebfc7bb5a1a96b2a38b4f83592491f0348f2ae84228\": container with ID starting with acc465b1f8d2b03f082d9ebfc7bb5a1a96b2a38b4f83592491f0348f2ae84228 not found: ID does not exist" Nov 25 10:37:52 crc kubenswrapper[4813]: I1125 10:37:52.400904 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-n6d5q"] Nov 25 10:37:52 crc kubenswrapper[4813]: I1125 10:37:52.403963 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-n6d5q"] Nov 25 10:37:53 crc kubenswrapper[4813]: I1125 10:37:53.362708 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5b7945bc75-gk2sw" event={"ID":"1badc1ca-27d9-47ce-ad68-5e11492481e1","Type":"ContainerStarted","Data":"3de3ea62e90bd9a4c0aff78b50a80055012a7169f708f76e40dcb1e112e3c86c"} Nov 25 10:37:53 crc kubenswrapper[4813]: I1125 10:37:53.362951 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-5b7945bc75-gk2sw" Nov 25 10:37:53 crc kubenswrapper[4813]: I1125 10:37:53.372953 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-authentication/oauth-openshift-5b7945bc75-gk2sw" Nov 25 10:37:53 crc kubenswrapper[4813]: I1125 10:37:53.389059 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-5b7945bc75-gk2sw" podStartSLOduration=27.389028731 podStartE2EDuration="27.389028731s" podCreationTimestamp="2025-11-25 10:37:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 10:37:53.385849975 +0000 UTC m=+370.515559871" watchObservedRunningTime="2025-11-25 10:37:53.389028731 +0000 UTC m=+370.518738617" Nov 25 10:37:53 crc kubenswrapper[4813]: I1125 10:37:53.631163 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f94406f9-8434-44b5-b86c-15a9d11c4245" path="/var/lib/kubelet/pods/f94406f9-8434-44b5-b86c-15a9d11c4245/volumes" Nov 25 10:38:13 crc kubenswrapper[4813]: I1125 10:38:13.482547 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rhgxx"] Nov 25 10:38:13 crc kubenswrapper[4813]: I1125 10:38:13.483410 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rhgxx" podUID="a5deac33-30de-491e-94ff-53fe67de0eb8" containerName="registry-server" containerID="cri-o://34dd1ac82103b0a311e70a5d10c0f651d4123939c34d3d3f67ea13d6034f690a" gracePeriod=30 Nov 25 10:38:13 crc kubenswrapper[4813]: I1125 10:38:13.491844 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mv2q6"] Nov 25 10:38:13 crc kubenswrapper[4813]: I1125 10:38:13.492101 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mv2q6" podUID="4c9a79a8-32f8-4018-b6e7-a76164389632" containerName="registry-server" containerID="cri-o://6e06446738c7715377f520f43cabdf8056d40d5b9df55f23345d6cb467ca8bba" gracePeriod=30 Nov 25 10:38:13 crc kubenswrapper[4813]: I1125 10:38:13.505759 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7s8tp"] Nov 25 10:38:13 crc kubenswrapper[4813]: I1125 10:38:13.505998 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-7s8tp" podUID="302f6a62-c67c-48ef-97bc-9b53cdf5f67e" containerName="marketplace-operator" containerID="cri-o://ee6dfb30c998d09b0d57792c9099b33a4a9752a7305d7672a192f3a55e155b95" gracePeriod=30 Nov 25 10:38:13 crc kubenswrapper[4813]: I1125 10:38:13.515570 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k4dl7"] Nov 25 10:38:13 crc kubenswrapper[4813]: I1125 10:38:13.515841 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-k4dl7" podUID="070d675e-0557-4a9e-9a9a-1a5019547e2a" containerName="registry-server" containerID="cri-o://f35ad792eb6175878544e83964ec81c3eda257cdcb44abe9d6cc54d071af693a" gracePeriod=30 Nov 25 10:38:13 crc kubenswrapper[4813]: I1125 10:38:13.530703 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-skdbx"] Nov 25 10:38:13 crc kubenswrapper[4813]: I1125 10:38:13.531509 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-skdbx" Nov 25 10:38:13 crc kubenswrapper[4813]: I1125 10:38:13.534650 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hw9jq"] Nov 25 10:38:13 crc kubenswrapper[4813]: I1125 10:38:13.535058 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hw9jq" podUID="40aa702b-1c32-45ad-ba16-ae04f8da0675" containerName="registry-server" containerID="cri-o://96ad241605b1da06e5dc504748015492e5f87b9b872730ba3e4757a5a5a8ae04" gracePeriod=30 Nov 25 10:38:13 crc kubenswrapper[4813]: I1125 10:38:13.545806 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-skdbx"] Nov 25 10:38:13 crc kubenswrapper[4813]: I1125 10:38:13.616708 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/18df3708-b841-4af2-acb4-de42ed8ec241-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-skdbx\" (UID: \"18df3708-b841-4af2-acb4-de42ed8ec241\") " pod="openshift-marketplace/marketplace-operator-79b997595-skdbx" Nov 25 10:38:13 crc kubenswrapper[4813]: I1125 10:38:13.616825 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86dgz\" (UniqueName: \"kubernetes.io/projected/18df3708-b841-4af2-acb4-de42ed8ec241-kube-api-access-86dgz\") pod \"marketplace-operator-79b997595-skdbx\" (UID: \"18df3708-b841-4af2-acb4-de42ed8ec241\") " pod="openshift-marketplace/marketplace-operator-79b997595-skdbx" Nov 25 10:38:13 crc kubenswrapper[4813]: I1125 10:38:13.616877 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/18df3708-b841-4af2-acb4-de42ed8ec241-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-skdbx\" (UID: \"18df3708-b841-4af2-acb4-de42ed8ec241\") " pod="openshift-marketplace/marketplace-operator-79b997595-skdbx" Nov 25 10:38:13 crc kubenswrapper[4813]: I1125 10:38:13.718002 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86dgz\" (UniqueName: \"kubernetes.io/projected/18df3708-b841-4af2-acb4-de42ed8ec241-kube-api-access-86dgz\") pod \"marketplace-operator-79b997595-skdbx\" (UID: \"18df3708-b841-4af2-acb4-de42ed8ec241\") " pod="openshift-marketplace/marketplace-operator-79b997595-skdbx" Nov 25 10:38:13 crc kubenswrapper[4813]: I1125 10:38:13.718134 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/18df3708-b841-4af2-acb4-de42ed8ec241-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-skdbx\" (UID: \"18df3708-b841-4af2-acb4-de42ed8ec241\") " pod="openshift-marketplace/marketplace-operator-79b997595-skdbx" Nov 25 10:38:13 crc kubenswrapper[4813]: I1125 10:38:13.718207 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/18df3708-b841-4af2-acb4-de42ed8ec241-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-skdbx\" (UID: \"18df3708-b841-4af2-acb4-de42ed8ec241\") " pod="openshift-marketplace/marketplace-operator-79b997595-skdbx" Nov 25 10:38:13 crc kubenswrapper[4813]: I1125 10:38:13.720109 4813 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/18df3708-b841-4af2-acb4-de42ed8ec241-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-skdbx\" (UID: \"18df3708-b841-4af2-acb4-de42ed8ec241\") " pod="openshift-marketplace/marketplace-operator-79b997595-skdbx" Nov 25 10:38:13 crc kubenswrapper[4813]: I1125 10:38:13.729188 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/18df3708-b841-4af2-acb4-de42ed8ec241-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-skdbx\" (UID: \"18df3708-b841-4af2-acb4-de42ed8ec241\") " pod="openshift-marketplace/marketplace-operator-79b997595-skdbx" Nov 25 10:38:13 crc kubenswrapper[4813]: I1125 10:38:13.737168 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86dgz\" (UniqueName: \"kubernetes.io/projected/18df3708-b841-4af2-acb4-de42ed8ec241-kube-api-access-86dgz\") pod \"marketplace-operator-79b997595-skdbx\" (UID: \"18df3708-b841-4af2-acb4-de42ed8ec241\") " pod="openshift-marketplace/marketplace-operator-79b997595-skdbx" Nov 25 10:38:13 crc kubenswrapper[4813]: I1125 10:38:13.855121 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-skdbx" Nov 25 10:38:13 crc kubenswrapper[4813]: I1125 10:38:13.934001 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k4dl7" Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.024893 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/070d675e-0557-4a9e-9a9a-1a5019547e2a-utilities\") pod \"070d675e-0557-4a9e-9a9a-1a5019547e2a\" (UID: \"070d675e-0557-4a9e-9a9a-1a5019547e2a\") " Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.024941 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4xgv\" (UniqueName: \"kubernetes.io/projected/070d675e-0557-4a9e-9a9a-1a5019547e2a-kube-api-access-d4xgv\") pod \"070d675e-0557-4a9e-9a9a-1a5019547e2a\" (UID: \"070d675e-0557-4a9e-9a9a-1a5019547e2a\") " Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.025023 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/070d675e-0557-4a9e-9a9a-1a5019547e2a-catalog-content\") pod \"070d675e-0557-4a9e-9a9a-1a5019547e2a\" (UID: \"070d675e-0557-4a9e-9a9a-1a5019547e2a\") " Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.031135 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/070d675e-0557-4a9e-9a9a-1a5019547e2a-kube-api-access-d4xgv" (OuterVolumeSpecName: "kube-api-access-d4xgv") pod "070d675e-0557-4a9e-9a9a-1a5019547e2a" (UID: "070d675e-0557-4a9e-9a9a-1a5019547e2a"). InnerVolumeSpecName "kube-api-access-d4xgv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.031741 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/070d675e-0557-4a9e-9a9a-1a5019547e2a-utilities" (OuterVolumeSpecName: "utilities") pod "070d675e-0557-4a9e-9a9a-1a5019547e2a" (UID: "070d675e-0557-4a9e-9a9a-1a5019547e2a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.053894 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/070d675e-0557-4a9e-9a9a-1a5019547e2a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "070d675e-0557-4a9e-9a9a-1a5019547e2a" (UID: "070d675e-0557-4a9e-9a9a-1a5019547e2a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.062561 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-7s8tp" Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.068966 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hw9jq" Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.080737 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mv2q6" Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.082312 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rhgxx" Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.111926 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-skdbx"] Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.129569 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4xgv\" (UniqueName: \"kubernetes.io/projected/070d675e-0557-4a9e-9a9a-1a5019547e2a-kube-api-access-d4xgv\") on node \"crc\" DevicePath \"\"" Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.129695 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/070d675e-0557-4a9e-9a9a-1a5019547e2a-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.129745 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/070d675e-0557-4a9e-9a9a-1a5019547e2a-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.230214 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c9a79a8-32f8-4018-b6e7-a76164389632-utilities\") pod \"4c9a79a8-32f8-4018-b6e7-a76164389632\" (UID: \"4c9a79a8-32f8-4018-b6e7-a76164389632\") " Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.231651 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c9a79a8-32f8-4018-b6e7-a76164389632-utilities" (OuterVolumeSpecName: "utilities") pod "4c9a79a8-32f8-4018-b6e7-a76164389632" (UID: "4c9a79a8-32f8-4018-b6e7-a76164389632"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.231993 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40aa702b-1c32-45ad-ba16-ae04f8da0675-catalog-content\") pod \"40aa702b-1c32-45ad-ba16-ae04f8da0675\" (UID: \"40aa702b-1c32-45ad-ba16-ae04f8da0675\") " Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.232071 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/302f6a62-c67c-48ef-97bc-9b53cdf5f67e-marketplace-trusted-ca\") pod \"302f6a62-c67c-48ef-97bc-9b53cdf5f67e\" (UID: \"302f6a62-c67c-48ef-97bc-9b53cdf5f67e\") " Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.232134 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5deac33-30de-491e-94ff-53fe67de0eb8-utilities\") pod \"a5deac33-30de-491e-94ff-53fe67de0eb8\" (UID: \"a5deac33-30de-491e-94ff-53fe67de0eb8\") " Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.232168 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5deac33-30de-491e-94ff-53fe67de0eb8-catalog-content\") pod \"a5deac33-30de-491e-94ff-53fe67de0eb8\" (UID: \"a5deac33-30de-491e-94ff-53fe67de0eb8\") " Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.232201 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2vzd\" (UniqueName: \"kubernetes.io/projected/a5deac33-30de-491e-94ff-53fe67de0eb8-kube-api-access-s2vzd\") pod \"a5deac33-30de-491e-94ff-53fe67de0eb8\" (UID: \"a5deac33-30de-491e-94ff-53fe67de0eb8\") " Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.232269 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/302f6a62-c67c-48ef-97bc-9b53cdf5f67e-marketplace-operator-metrics\") pod \"302f6a62-c67c-48ef-97bc-9b53cdf5f67e\" (UID: \"302f6a62-c67c-48ef-97bc-9b53cdf5f67e\") " Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.232298 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40aa702b-1c32-45ad-ba16-ae04f8da0675-utilities\") pod \"40aa702b-1c32-45ad-ba16-ae04f8da0675\" (UID: \"40aa702b-1c32-45ad-ba16-ae04f8da0675\") " Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.232355 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lcnh5\" (UniqueName: \"kubernetes.io/projected/4c9a79a8-32f8-4018-b6e7-a76164389632-kube-api-access-lcnh5\") pod \"4c9a79a8-32f8-4018-b6e7-a76164389632\" (UID: \"4c9a79a8-32f8-4018-b6e7-a76164389632\") " Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.232386 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bn6jg\" (UniqueName: \"kubernetes.io/projected/302f6a62-c67c-48ef-97bc-9b53cdf5f67e-kube-api-access-bn6jg\") pod \"302f6a62-c67c-48ef-97bc-9b53cdf5f67e\" (UID: \"302f6a62-c67c-48ef-97bc-9b53cdf5f67e\") " Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.232421 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfm2n\" (UniqueName: 
\"kubernetes.io/projected/40aa702b-1c32-45ad-ba16-ae04f8da0675-kube-api-access-kfm2n\") pod \"40aa702b-1c32-45ad-ba16-ae04f8da0675\" (UID: \"40aa702b-1c32-45ad-ba16-ae04f8da0675\") " Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.232458 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c9a79a8-32f8-4018-b6e7-a76164389632-catalog-content\") pod \"4c9a79a8-32f8-4018-b6e7-a76164389632\" (UID: \"4c9a79a8-32f8-4018-b6e7-a76164389632\") " Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.234225 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40aa702b-1c32-45ad-ba16-ae04f8da0675-utilities" (OuterVolumeSpecName: "utilities") pod "40aa702b-1c32-45ad-ba16-ae04f8da0675" (UID: "40aa702b-1c32-45ad-ba16-ae04f8da0675"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.234396 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/302f6a62-c67c-48ef-97bc-9b53cdf5f67e-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "302f6a62-c67c-48ef-97bc-9b53cdf5f67e" (UID: "302f6a62-c67c-48ef-97bc-9b53cdf5f67e"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.235456 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5deac33-30de-491e-94ff-53fe67de0eb8-utilities" (OuterVolumeSpecName: "utilities") pod "a5deac33-30de-491e-94ff-53fe67de0eb8" (UID: "a5deac33-30de-491e-94ff-53fe67de0eb8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.236188 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40aa702b-1c32-45ad-ba16-ae04f8da0675-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.236219 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c9a79a8-32f8-4018-b6e7-a76164389632-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.236236 4813 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/302f6a62-c67c-48ef-97bc-9b53cdf5f67e-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.236251 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5deac33-30de-491e-94ff-53fe67de0eb8-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.237277 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c9a79a8-32f8-4018-b6e7-a76164389632-kube-api-access-lcnh5" (OuterVolumeSpecName: "kube-api-access-lcnh5") pod "4c9a79a8-32f8-4018-b6e7-a76164389632" (UID: "4c9a79a8-32f8-4018-b6e7-a76164389632"). InnerVolumeSpecName "kube-api-access-lcnh5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.237519 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40aa702b-1c32-45ad-ba16-ae04f8da0675-kube-api-access-kfm2n" (OuterVolumeSpecName: "kube-api-access-kfm2n") pod "40aa702b-1c32-45ad-ba16-ae04f8da0675" (UID: "40aa702b-1c32-45ad-ba16-ae04f8da0675"). InnerVolumeSpecName "kube-api-access-kfm2n". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.238185 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5deac33-30de-491e-94ff-53fe67de0eb8-kube-api-access-s2vzd" (OuterVolumeSpecName: "kube-api-access-s2vzd") pod "a5deac33-30de-491e-94ff-53fe67de0eb8" (UID: "a5deac33-30de-491e-94ff-53fe67de0eb8"). InnerVolumeSpecName "kube-api-access-s2vzd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.238354 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/302f6a62-c67c-48ef-97bc-9b53cdf5f67e-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "302f6a62-c67c-48ef-97bc-9b53cdf5f67e" (UID: "302f6a62-c67c-48ef-97bc-9b53cdf5f67e"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.242206 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/302f6a62-c67c-48ef-97bc-9b53cdf5f67e-kube-api-access-bn6jg" (OuterVolumeSpecName: "kube-api-access-bn6jg") pod "302f6a62-c67c-48ef-97bc-9b53cdf5f67e" (UID: "302f6a62-c67c-48ef-97bc-9b53cdf5f67e"). InnerVolumeSpecName "kube-api-access-bn6jg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.302745 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5deac33-30de-491e-94ff-53fe67de0eb8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a5deac33-30de-491e-94ff-53fe67de0eb8" (UID: "a5deac33-30de-491e-94ff-53fe67de0eb8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.305358 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c9a79a8-32f8-4018-b6e7-a76164389632-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4c9a79a8-32f8-4018-b6e7-a76164389632" (UID: "4c9a79a8-32f8-4018-b6e7-a76164389632"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.338262 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5deac33-30de-491e-94ff-53fe67de0eb8-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.338303 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2vzd\" (UniqueName: \"kubernetes.io/projected/a5deac33-30de-491e-94ff-53fe67de0eb8-kube-api-access-s2vzd\") on node \"crc\" DevicePath \"\"" Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.338320 4813 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/302f6a62-c67c-48ef-97bc-9b53cdf5f67e-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.338333 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lcnh5\" (UniqueName: \"kubernetes.io/projected/4c9a79a8-32f8-4018-b6e7-a76164389632-kube-api-access-lcnh5\") on node \"crc\" DevicePath \"\"" Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.338343 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bn6jg\" (UniqueName: \"kubernetes.io/projected/302f6a62-c67c-48ef-97bc-9b53cdf5f67e-kube-api-access-bn6jg\") on node \"crc\" DevicePath \"\"" Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.338355 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfm2n\" (UniqueName: \"kubernetes.io/projected/40aa702b-1c32-45ad-ba16-ae04f8da0675-kube-api-access-kfm2n\") on node \"crc\" DevicePath \"\"" Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.338366 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c9a79a8-32f8-4018-b6e7-a76164389632-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.347241 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40aa702b-1c32-45ad-ba16-ae04f8da0675-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "40aa702b-1c32-45ad-ba16-ae04f8da0675" (UID: "40aa702b-1c32-45ad-ba16-ae04f8da0675"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.439587 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40aa702b-1c32-45ad-ba16-ae04f8da0675-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.486397 4813 generic.go:334] "Generic (PLEG): container finished" podID="40aa702b-1c32-45ad-ba16-ae04f8da0675" containerID="96ad241605b1da06e5dc504748015492e5f87b9b872730ba3e4757a5a5a8ae04" exitCode=0 Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.486490 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hw9jq" event={"ID":"40aa702b-1c32-45ad-ba16-ae04f8da0675","Type":"ContainerDied","Data":"96ad241605b1da06e5dc504748015492e5f87b9b872730ba3e4757a5a5a8ae04"} Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.486519 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hw9jq" event={"ID":"40aa702b-1c32-45ad-ba16-ae04f8da0675","Type":"ContainerDied","Data":"62c7f2ba3c9453acc22ad94a75daed31aee222faf3a5223fcafc4d7e43f26e8f"} Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.486556 4813 scope.go:117] "RemoveContainer" containerID="96ad241605b1da06e5dc504748015492e5f87b9b872730ba3e4757a5a5a8ae04" Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.486718 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hw9jq" Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.495425 4813 generic.go:334] "Generic (PLEG): container finished" podID="a5deac33-30de-491e-94ff-53fe67de0eb8" containerID="34dd1ac82103b0a311e70a5d10c0f651d4123939c34d3d3f67ea13d6034f690a" exitCode=0 Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.495532 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rhgxx" event={"ID":"a5deac33-30de-491e-94ff-53fe67de0eb8","Type":"ContainerDied","Data":"34dd1ac82103b0a311e70a5d10c0f651d4123939c34d3d3f67ea13d6034f690a"} Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.495582 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rhgxx" event={"ID":"a5deac33-30de-491e-94ff-53fe67de0eb8","Type":"ContainerDied","Data":"7e9c65c6e2a395f93a1f77b4487c6419c72be03dda10173fa706bbc663a7e4dc"} Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.495793 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rhgxx" Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.500290 4813 generic.go:334] "Generic (PLEG): container finished" podID="070d675e-0557-4a9e-9a9a-1a5019547e2a" containerID="f35ad792eb6175878544e83964ec81c3eda257cdcb44abe9d6cc54d071af693a" exitCode=0 Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.500415 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k4dl7" event={"ID":"070d675e-0557-4a9e-9a9a-1a5019547e2a","Type":"ContainerDied","Data":"f35ad792eb6175878544e83964ec81c3eda257cdcb44abe9d6cc54d071af693a"} Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.500475 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k4dl7" event={"ID":"070d675e-0557-4a9e-9a9a-1a5019547e2a","Type":"ContainerDied","Data":"7b90b227bd233c35a032baf51cfd419bfdb84a1cf39fdb58b6b1e85863229d90"} Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.500742 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k4dl7" Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.502642 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-skdbx" event={"ID":"18df3708-b841-4af2-acb4-de42ed8ec241","Type":"ContainerStarted","Data":"f22d45a1814d799581c9941b00dbeee249df75cbc53f676e77458bb414f0a642"} Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.506192 4813 generic.go:334] "Generic (PLEG): container finished" podID="302f6a62-c67c-48ef-97bc-9b53cdf5f67e" containerID="ee6dfb30c998d09b0d57792c9099b33a4a9752a7305d7672a192f3a55e155b95" exitCode=0 Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.506298 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7s8tp" event={"ID":"302f6a62-c67c-48ef-97bc-9b53cdf5f67e","Type":"ContainerDied","Data":"ee6dfb30c998d09b0d57792c9099b33a4a9752a7305d7672a192f3a55e155b95"} Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.506334 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7s8tp" event={"ID":"302f6a62-c67c-48ef-97bc-9b53cdf5f67e","Type":"ContainerDied","Data":"bb79f31f3c29769689828b72df3bc01da9a448957fdcff837ea94d93edce5bb1"} Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.506379 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-7s8tp" Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.508865 4813 scope.go:117] "RemoveContainer" containerID="553135b323032b4e718509fc5757e2ffb7b5060550a2533a32ac27f91ff6ecc2" Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.512563 4813 generic.go:334] "Generic (PLEG): container finished" podID="4c9a79a8-32f8-4018-b6e7-a76164389632" containerID="6e06446738c7715377f520f43cabdf8056d40d5b9df55f23345d6cb467ca8bba" exitCode=0 Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.512934 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mv2q6" event={"ID":"4c9a79a8-32f8-4018-b6e7-a76164389632","Type":"ContainerDied","Data":"6e06446738c7715377f520f43cabdf8056d40d5b9df55f23345d6cb467ca8bba"} Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.513198 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mv2q6" event={"ID":"4c9a79a8-32f8-4018-b6e7-a76164389632","Type":"ContainerDied","Data":"4a4e55fc969518c81d136129070b808c05e25d901f01f7e6972289b26155f9cd"} Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.513510 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mv2q6" Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.533857 4813 scope.go:117] "RemoveContainer" containerID="b806ff102fd4368644691419dda058689f65b84266b2eaa72921889ec09c1771" Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.563802 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hw9jq"] Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.578896 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hw9jq"] Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.579236 4813 scope.go:117] "RemoveContainer" containerID="96ad241605b1da06e5dc504748015492e5f87b9b872730ba3e4757a5a5a8ae04" Nov 25 10:38:14 crc kubenswrapper[4813]: E1125 10:38:14.580784 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96ad241605b1da06e5dc504748015492e5f87b9b872730ba3e4757a5a5a8ae04\": container with ID starting with 96ad241605b1da06e5dc504748015492e5f87b9b872730ba3e4757a5a5a8ae04 not found: ID does not exist" containerID="96ad241605b1da06e5dc504748015492e5f87b9b872730ba3e4757a5a5a8ae04" Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.580918 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96ad241605b1da06e5dc504748015492e5f87b9b872730ba3e4757a5a5a8ae04"} err="failed to get container status \"96ad241605b1da06e5dc504748015492e5f87b9b872730ba3e4757a5a5a8ae04\": rpc error: code = NotFound desc = could not find container \"96ad241605b1da06e5dc504748015492e5f87b9b872730ba3e4757a5a5a8ae04\": container with ID starting with 96ad241605b1da06e5dc504748015492e5f87b9b872730ba3e4757a5a5a8ae04 not found: ID does not exist" Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.580961 4813 scope.go:117] "RemoveContainer" containerID="553135b323032b4e718509fc5757e2ffb7b5060550a2533a32ac27f91ff6ecc2" Nov 25 10:38:14 crc kubenswrapper[4813]: E1125 10:38:14.584147 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"553135b323032b4e718509fc5757e2ffb7b5060550a2533a32ac27f91ff6ecc2\": container with ID starting with 553135b323032b4e718509fc5757e2ffb7b5060550a2533a32ac27f91ff6ecc2 not found: ID does not exist" containerID="553135b323032b4e718509fc5757e2ffb7b5060550a2533a32ac27f91ff6ecc2" Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.584204 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"553135b323032b4e718509fc5757e2ffb7b5060550a2533a32ac27f91ff6ecc2"} err="failed to get container status \"553135b323032b4e718509fc5757e2ffb7b5060550a2533a32ac27f91ff6ecc2\": rpc error: code = NotFound desc = could not find container \"553135b323032b4e718509fc5757e2ffb7b5060550a2533a32ac27f91ff6ecc2\": container with ID starting with 553135b323032b4e718509fc5757e2ffb7b5060550a2533a32ac27f91ff6ecc2 not found: ID does not exist" Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.584246 4813 scope.go:117] "RemoveContainer" containerID="b806ff102fd4368644691419dda058689f65b84266b2eaa72921889ec09c1771" Nov 25 10:38:14 crc kubenswrapper[4813]: E1125 10:38:14.586027 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b806ff102fd4368644691419dda058689f65b84266b2eaa72921889ec09c1771\": container with ID starting with b806ff102fd4368644691419dda058689f65b84266b2eaa72921889ec09c1771 not found: ID does not exist" containerID="b806ff102fd4368644691419dda058689f65b84266b2eaa72921889ec09c1771" Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.586421 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b806ff102fd4368644691419dda058689f65b84266b2eaa72921889ec09c1771"} err="failed to get container status \"b806ff102fd4368644691419dda058689f65b84266b2eaa72921889ec09c1771\": rpc error: code = NotFound desc = could not find container \"b806ff102fd4368644691419dda058689f65b84266b2eaa72921889ec09c1771\": container with ID starting with b806ff102fd4368644691419dda058689f65b84266b2eaa72921889ec09c1771 not found: ID does not exist" Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.586465 4813 scope.go:117] "RemoveContainer" containerID="34dd1ac82103b0a311e70a5d10c0f651d4123939c34d3d3f67ea13d6034f690a" Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.590798 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k4dl7"] Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.597079 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-k4dl7"] Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.599175 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rhgxx"] Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.605375 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rhgxx"] Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.616611 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mv2q6"] Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.618384 4813 scope.go:117] "RemoveContainer" containerID="d7cbf2acf109b4d343c9a24742f2a1c733160bda4bf08a104fc3bb082b999d80" Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.620488 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mv2q6"] Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 
10:38:14.622781 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7s8tp"] Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.625717 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7s8tp"] Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.655766 4813 scope.go:117] "RemoveContainer" containerID="466a163e5e20cbae5c0494cedce7a37492fcf70ec6a37cc1c30c8234c9d5e153" Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.680129 4813 scope.go:117] "RemoveContainer" containerID="34dd1ac82103b0a311e70a5d10c0f651d4123939c34d3d3f67ea13d6034f690a" Nov 25 10:38:14 crc kubenswrapper[4813]: E1125 10:38:14.680711 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34dd1ac82103b0a311e70a5d10c0f651d4123939c34d3d3f67ea13d6034f690a\": container with ID starting with 34dd1ac82103b0a311e70a5d10c0f651d4123939c34d3d3f67ea13d6034f690a not found: ID does not exist" containerID="34dd1ac82103b0a311e70a5d10c0f651d4123939c34d3d3f67ea13d6034f690a" Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.680771 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34dd1ac82103b0a311e70a5d10c0f651d4123939c34d3d3f67ea13d6034f690a"} err="failed to get container status \"34dd1ac82103b0a311e70a5d10c0f651d4123939c34d3d3f67ea13d6034f690a\": rpc error: code = NotFound desc = could not find container \"34dd1ac82103b0a311e70a5d10c0f651d4123939c34d3d3f67ea13d6034f690a\": container with ID starting with 34dd1ac82103b0a311e70a5d10c0f651d4123939c34d3d3f67ea13d6034f690a not found: ID does not exist" Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.680806 4813 scope.go:117] "RemoveContainer" containerID="d7cbf2acf109b4d343c9a24742f2a1c733160bda4bf08a104fc3bb082b999d80" Nov 25 10:38:14 crc kubenswrapper[4813]: E1125 10:38:14.681342 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7cbf2acf109b4d343c9a24742f2a1c733160bda4bf08a104fc3bb082b999d80\": container with ID starting with d7cbf2acf109b4d343c9a24742f2a1c733160bda4bf08a104fc3bb082b999d80 not found: ID does not exist" containerID="d7cbf2acf109b4d343c9a24742f2a1c733160bda4bf08a104fc3bb082b999d80" Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.681380 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7cbf2acf109b4d343c9a24742f2a1c733160bda4bf08a104fc3bb082b999d80"} err="failed to get container status \"d7cbf2acf109b4d343c9a24742f2a1c733160bda4bf08a104fc3bb082b999d80\": rpc error: code = NotFound desc = could not find container \"d7cbf2acf109b4d343c9a24742f2a1c733160bda4bf08a104fc3bb082b999d80\": container with ID starting with d7cbf2acf109b4d343c9a24742f2a1c733160bda4bf08a104fc3bb082b999d80 not found: ID does not exist" Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.681405 4813 scope.go:117] "RemoveContainer" containerID="466a163e5e20cbae5c0494cedce7a37492fcf70ec6a37cc1c30c8234c9d5e153" Nov 25 10:38:14 crc kubenswrapper[4813]: E1125 10:38:14.681841 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"466a163e5e20cbae5c0494cedce7a37492fcf70ec6a37cc1c30c8234c9d5e153\": container with ID starting with 466a163e5e20cbae5c0494cedce7a37492fcf70ec6a37cc1c30c8234c9d5e153 not found: ID does not exist" 
containerID="466a163e5e20cbae5c0494cedce7a37492fcf70ec6a37cc1c30c8234c9d5e153" Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.681866 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"466a163e5e20cbae5c0494cedce7a37492fcf70ec6a37cc1c30c8234c9d5e153"} err="failed to get container status \"466a163e5e20cbae5c0494cedce7a37492fcf70ec6a37cc1c30c8234c9d5e153\": rpc error: code = NotFound desc = could not find container \"466a163e5e20cbae5c0494cedce7a37492fcf70ec6a37cc1c30c8234c9d5e153\": container with ID starting with 466a163e5e20cbae5c0494cedce7a37492fcf70ec6a37cc1c30c8234c9d5e153 not found: ID does not exist" Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.681879 4813 scope.go:117] "RemoveContainer" containerID="f35ad792eb6175878544e83964ec81c3eda257cdcb44abe9d6cc54d071af693a" Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.702508 4813 scope.go:117] "RemoveContainer" containerID="87ff2b293d8bf7fa43e377e7ac812509b7d082ac5e6f214fc3a8a21b80f3dbfa" Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.717151 4813 scope.go:117] "RemoveContainer" containerID="4b4f61792dc8d30f13a72f510de745ee9c8baed57fea9824952db21e563bcd6c" Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.732865 4813 scope.go:117] "RemoveContainer" containerID="f35ad792eb6175878544e83964ec81c3eda257cdcb44abe9d6cc54d071af693a" Nov 25 10:38:14 crc kubenswrapper[4813]: E1125 10:38:14.734169 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f35ad792eb6175878544e83964ec81c3eda257cdcb44abe9d6cc54d071af693a\": container with ID starting with f35ad792eb6175878544e83964ec81c3eda257cdcb44abe9d6cc54d071af693a not found: ID does not exist" containerID="f35ad792eb6175878544e83964ec81c3eda257cdcb44abe9d6cc54d071af693a" Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.734238 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f35ad792eb6175878544e83964ec81c3eda257cdcb44abe9d6cc54d071af693a"} err="failed to get container status \"f35ad792eb6175878544e83964ec81c3eda257cdcb44abe9d6cc54d071af693a\": rpc error: code = NotFound desc = could not find container \"f35ad792eb6175878544e83964ec81c3eda257cdcb44abe9d6cc54d071af693a\": container with ID starting with f35ad792eb6175878544e83964ec81c3eda257cdcb44abe9d6cc54d071af693a not found: ID does not exist" Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.734281 4813 scope.go:117] "RemoveContainer" containerID="87ff2b293d8bf7fa43e377e7ac812509b7d082ac5e6f214fc3a8a21b80f3dbfa" Nov 25 10:38:14 crc kubenswrapper[4813]: E1125 10:38:14.735132 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87ff2b293d8bf7fa43e377e7ac812509b7d082ac5e6f214fc3a8a21b80f3dbfa\": container with ID starting with 87ff2b293d8bf7fa43e377e7ac812509b7d082ac5e6f214fc3a8a21b80f3dbfa not found: ID does not exist" containerID="87ff2b293d8bf7fa43e377e7ac812509b7d082ac5e6f214fc3a8a21b80f3dbfa" Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.735170 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87ff2b293d8bf7fa43e377e7ac812509b7d082ac5e6f214fc3a8a21b80f3dbfa"} err="failed to get container status \"87ff2b293d8bf7fa43e377e7ac812509b7d082ac5e6f214fc3a8a21b80f3dbfa\": rpc error: code = NotFound desc = could not find container 
\"87ff2b293d8bf7fa43e377e7ac812509b7d082ac5e6f214fc3a8a21b80f3dbfa\": container with ID starting with 87ff2b293d8bf7fa43e377e7ac812509b7d082ac5e6f214fc3a8a21b80f3dbfa not found: ID does not exist" Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.735203 4813 scope.go:117] "RemoveContainer" containerID="4b4f61792dc8d30f13a72f510de745ee9c8baed57fea9824952db21e563bcd6c" Nov 25 10:38:14 crc kubenswrapper[4813]: E1125 10:38:14.735833 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b4f61792dc8d30f13a72f510de745ee9c8baed57fea9824952db21e563bcd6c\": container with ID starting with 4b4f61792dc8d30f13a72f510de745ee9c8baed57fea9824952db21e563bcd6c not found: ID does not exist" containerID="4b4f61792dc8d30f13a72f510de745ee9c8baed57fea9824952db21e563bcd6c" Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.735878 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b4f61792dc8d30f13a72f510de745ee9c8baed57fea9824952db21e563bcd6c"} err="failed to get container status \"4b4f61792dc8d30f13a72f510de745ee9c8baed57fea9824952db21e563bcd6c\": rpc error: code = NotFound desc = could not find container \"4b4f61792dc8d30f13a72f510de745ee9c8baed57fea9824952db21e563bcd6c\": container with ID starting with 4b4f61792dc8d30f13a72f510de745ee9c8baed57fea9824952db21e563bcd6c not found: ID does not exist" Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.735901 4813 scope.go:117] "RemoveContainer" containerID="ee6dfb30c998d09b0d57792c9099b33a4a9752a7305d7672a192f3a55e155b95" Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.749276 4813 scope.go:117] "RemoveContainer" containerID="ee6dfb30c998d09b0d57792c9099b33a4a9752a7305d7672a192f3a55e155b95" Nov 25 10:38:14 crc kubenswrapper[4813]: E1125 10:38:14.750057 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee6dfb30c998d09b0d57792c9099b33a4a9752a7305d7672a192f3a55e155b95\": container with ID starting with ee6dfb30c998d09b0d57792c9099b33a4a9752a7305d7672a192f3a55e155b95 not found: ID does not exist" containerID="ee6dfb30c998d09b0d57792c9099b33a4a9752a7305d7672a192f3a55e155b95" Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.750115 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee6dfb30c998d09b0d57792c9099b33a4a9752a7305d7672a192f3a55e155b95"} err="failed to get container status \"ee6dfb30c998d09b0d57792c9099b33a4a9752a7305d7672a192f3a55e155b95\": rpc error: code = NotFound desc = could not find container \"ee6dfb30c998d09b0d57792c9099b33a4a9752a7305d7672a192f3a55e155b95\": container with ID starting with ee6dfb30c998d09b0d57792c9099b33a4a9752a7305d7672a192f3a55e155b95 not found: ID does not exist" Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.750152 4813 scope.go:117] "RemoveContainer" containerID="6e06446738c7715377f520f43cabdf8056d40d5b9df55f23345d6cb467ca8bba" Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.762486 4813 scope.go:117] "RemoveContainer" containerID="008535fd521e688e1160f8f654c4585daa80aa8f99d8e87eccbf4a4ad1d5224f" Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.777390 4813 scope.go:117] "RemoveContainer" containerID="a9c9c39c578ccc160e22e6c3d9edc861c483fdb9f98e5ce6e49248bded661e84" Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.791722 4813 scope.go:117] "RemoveContainer" 
containerID="6e06446738c7715377f520f43cabdf8056d40d5b9df55f23345d6cb467ca8bba" Nov 25 10:38:14 crc kubenswrapper[4813]: E1125 10:38:14.794520 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e06446738c7715377f520f43cabdf8056d40d5b9df55f23345d6cb467ca8bba\": container with ID starting with 6e06446738c7715377f520f43cabdf8056d40d5b9df55f23345d6cb467ca8bba not found: ID does not exist" containerID="6e06446738c7715377f520f43cabdf8056d40d5b9df55f23345d6cb467ca8bba" Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.794570 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e06446738c7715377f520f43cabdf8056d40d5b9df55f23345d6cb467ca8bba"} err="failed to get container status \"6e06446738c7715377f520f43cabdf8056d40d5b9df55f23345d6cb467ca8bba\": rpc error: code = NotFound desc = could not find container \"6e06446738c7715377f520f43cabdf8056d40d5b9df55f23345d6cb467ca8bba\": container with ID starting with 6e06446738c7715377f520f43cabdf8056d40d5b9df55f23345d6cb467ca8bba not found: ID does not exist" Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.794605 4813 scope.go:117] "RemoveContainer" containerID="008535fd521e688e1160f8f654c4585daa80aa8f99d8e87eccbf4a4ad1d5224f" Nov 25 10:38:14 crc kubenswrapper[4813]: E1125 10:38:14.795034 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"008535fd521e688e1160f8f654c4585daa80aa8f99d8e87eccbf4a4ad1d5224f\": container with ID starting with 008535fd521e688e1160f8f654c4585daa80aa8f99d8e87eccbf4a4ad1d5224f not found: ID does not exist" containerID="008535fd521e688e1160f8f654c4585daa80aa8f99d8e87eccbf4a4ad1d5224f" Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.795062 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"008535fd521e688e1160f8f654c4585daa80aa8f99d8e87eccbf4a4ad1d5224f"} err="failed to get container status \"008535fd521e688e1160f8f654c4585daa80aa8f99d8e87eccbf4a4ad1d5224f\": rpc error: code = NotFound desc = could not find container \"008535fd521e688e1160f8f654c4585daa80aa8f99d8e87eccbf4a4ad1d5224f\": container with ID starting with 008535fd521e688e1160f8f654c4585daa80aa8f99d8e87eccbf4a4ad1d5224f not found: ID does not exist" Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.795081 4813 scope.go:117] "RemoveContainer" containerID="a9c9c39c578ccc160e22e6c3d9edc861c483fdb9f98e5ce6e49248bded661e84" Nov 25 10:38:14 crc kubenswrapper[4813]: E1125 10:38:14.795747 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9c9c39c578ccc160e22e6c3d9edc861c483fdb9f98e5ce6e49248bded661e84\": container with ID starting with a9c9c39c578ccc160e22e6c3d9edc861c483fdb9f98e5ce6e49248bded661e84 not found: ID does not exist" containerID="a9c9c39c578ccc160e22e6c3d9edc861c483fdb9f98e5ce6e49248bded661e84" Nov 25 10:38:14 crc kubenswrapper[4813]: I1125 10:38:14.795778 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9c9c39c578ccc160e22e6c3d9edc861c483fdb9f98e5ce6e49248bded661e84"} err="failed to get container status \"a9c9c39c578ccc160e22e6c3d9edc861c483fdb9f98e5ce6e49248bded661e84\": rpc error: code = NotFound desc = could not find container \"a9c9c39c578ccc160e22e6c3d9edc861c483fdb9f98e5ce6e49248bded661e84\": container with ID starting with 
a9c9c39c578ccc160e22e6c3d9edc861c483fdb9f98e5ce6e49248bded661e84 not found: ID does not exist" Nov 25 10:38:15 crc kubenswrapper[4813]: I1125 10:38:15.522065 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-skdbx" event={"ID":"18df3708-b841-4af2-acb4-de42ed8ec241","Type":"ContainerStarted","Data":"b2c4004dc2865166443f88e7d21c18740b51e3771cba93151a6015b0447aca61"} Nov 25 10:38:15 crc kubenswrapper[4813]: I1125 10:38:15.522424 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-skdbx" Nov 25 10:38:15 crc kubenswrapper[4813]: I1125 10:38:15.532847 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-skdbx" Nov 25 10:38:15 crc kubenswrapper[4813]: I1125 10:38:15.560780 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-skdbx" podStartSLOduration=2.560750666 podStartE2EDuration="2.560750666s" podCreationTimestamp="2025-11-25 10:38:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 10:38:15.539559286 +0000 UTC m=+392.669269182" watchObservedRunningTime="2025-11-25 10:38:15.560750666 +0000 UTC m=+392.690460562" Nov 25 10:38:15 crc kubenswrapper[4813]: I1125 10:38:15.629026 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="070d675e-0557-4a9e-9a9a-1a5019547e2a" path="/var/lib/kubelet/pods/070d675e-0557-4a9e-9a9a-1a5019547e2a/volumes" Nov 25 10:38:15 crc kubenswrapper[4813]: I1125 10:38:15.629828 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="302f6a62-c67c-48ef-97bc-9b53cdf5f67e" path="/var/lib/kubelet/pods/302f6a62-c67c-48ef-97bc-9b53cdf5f67e/volumes" Nov 25 10:38:15 crc kubenswrapper[4813]: I1125 10:38:15.630331 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40aa702b-1c32-45ad-ba16-ae04f8da0675" path="/var/lib/kubelet/pods/40aa702b-1c32-45ad-ba16-ae04f8da0675/volumes" Nov 25 10:38:15 crc kubenswrapper[4813]: I1125 10:38:15.631492 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c9a79a8-32f8-4018-b6e7-a76164389632" path="/var/lib/kubelet/pods/4c9a79a8-32f8-4018-b6e7-a76164389632/volumes" Nov 25 10:38:15 crc kubenswrapper[4813]: I1125 10:38:15.632177 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5deac33-30de-491e-94ff-53fe67de0eb8" path="/var/lib/kubelet/pods/a5deac33-30de-491e-94ff-53fe67de0eb8/volumes" Nov 25 10:38:15 crc kubenswrapper[4813]: I1125 10:38:15.701955 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lm5fh"] Nov 25 10:38:15 crc kubenswrapper[4813]: E1125 10:38:15.702179 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c9a79a8-32f8-4018-b6e7-a76164389632" containerName="extract-utilities" Nov 25 10:38:15 crc kubenswrapper[4813]: I1125 10:38:15.702194 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c9a79a8-32f8-4018-b6e7-a76164389632" containerName="extract-utilities" Nov 25 10:38:15 crc kubenswrapper[4813]: E1125 10:38:15.702205 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="070d675e-0557-4a9e-9a9a-1a5019547e2a" containerName="extract-content" Nov 25 10:38:15 crc kubenswrapper[4813]: I1125 10:38:15.702213 4813 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="070d675e-0557-4a9e-9a9a-1a5019547e2a" containerName="extract-content" Nov 25 10:38:15 crc kubenswrapper[4813]: E1125 10:38:15.702223 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5deac33-30de-491e-94ff-53fe67de0eb8" containerName="registry-server" Nov 25 10:38:15 crc kubenswrapper[4813]: I1125 10:38:15.702231 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5deac33-30de-491e-94ff-53fe67de0eb8" containerName="registry-server" Nov 25 10:38:15 crc kubenswrapper[4813]: E1125 10:38:15.702242 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5deac33-30de-491e-94ff-53fe67de0eb8" containerName="extract-utilities" Nov 25 10:38:15 crc kubenswrapper[4813]: I1125 10:38:15.702251 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5deac33-30de-491e-94ff-53fe67de0eb8" containerName="extract-utilities" Nov 25 10:38:15 crc kubenswrapper[4813]: E1125 10:38:15.702264 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40aa702b-1c32-45ad-ba16-ae04f8da0675" containerName="extract-utilities" Nov 25 10:38:15 crc kubenswrapper[4813]: I1125 10:38:15.702271 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="40aa702b-1c32-45ad-ba16-ae04f8da0675" containerName="extract-utilities" Nov 25 10:38:15 crc kubenswrapper[4813]: E1125 10:38:15.702283 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="070d675e-0557-4a9e-9a9a-1a5019547e2a" containerName="extract-utilities" Nov 25 10:38:15 crc kubenswrapper[4813]: I1125 10:38:15.702291 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="070d675e-0557-4a9e-9a9a-1a5019547e2a" containerName="extract-utilities" Nov 25 10:38:15 crc kubenswrapper[4813]: E1125 10:38:15.702300 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c9a79a8-32f8-4018-b6e7-a76164389632" containerName="registry-server" Nov 25 10:38:15 crc kubenswrapper[4813]: I1125 10:38:15.702308 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c9a79a8-32f8-4018-b6e7-a76164389632" containerName="registry-server" Nov 25 10:38:15 crc kubenswrapper[4813]: E1125 10:38:15.702320 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="302f6a62-c67c-48ef-97bc-9b53cdf5f67e" containerName="marketplace-operator" Nov 25 10:38:15 crc kubenswrapper[4813]: I1125 10:38:15.702328 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="302f6a62-c67c-48ef-97bc-9b53cdf5f67e" containerName="marketplace-operator" Nov 25 10:38:15 crc kubenswrapper[4813]: E1125 10:38:15.702337 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="070d675e-0557-4a9e-9a9a-1a5019547e2a" containerName="registry-server" Nov 25 10:38:15 crc kubenswrapper[4813]: I1125 10:38:15.702344 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="070d675e-0557-4a9e-9a9a-1a5019547e2a" containerName="registry-server" Nov 25 10:38:15 crc kubenswrapper[4813]: E1125 10:38:15.702354 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40aa702b-1c32-45ad-ba16-ae04f8da0675" containerName="registry-server" Nov 25 10:38:15 crc kubenswrapper[4813]: I1125 10:38:15.702361 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="40aa702b-1c32-45ad-ba16-ae04f8da0675" containerName="registry-server" Nov 25 10:38:15 crc kubenswrapper[4813]: E1125 10:38:15.702371 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40aa702b-1c32-45ad-ba16-ae04f8da0675" containerName="extract-content" Nov 25 10:38:15 crc kubenswrapper[4813]: I1125 
10:38:15.702378 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="40aa702b-1c32-45ad-ba16-ae04f8da0675" containerName="extract-content" Nov 25 10:38:15 crc kubenswrapper[4813]: E1125 10:38:15.702387 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5deac33-30de-491e-94ff-53fe67de0eb8" containerName="extract-content" Nov 25 10:38:15 crc kubenswrapper[4813]: I1125 10:38:15.702395 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5deac33-30de-491e-94ff-53fe67de0eb8" containerName="extract-content" Nov 25 10:38:15 crc kubenswrapper[4813]: E1125 10:38:15.702408 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c9a79a8-32f8-4018-b6e7-a76164389632" containerName="extract-content" Nov 25 10:38:15 crc kubenswrapper[4813]: I1125 10:38:15.702417 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c9a79a8-32f8-4018-b6e7-a76164389632" containerName="extract-content" Nov 25 10:38:15 crc kubenswrapper[4813]: I1125 10:38:15.702511 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c9a79a8-32f8-4018-b6e7-a76164389632" containerName="registry-server" Nov 25 10:38:15 crc kubenswrapper[4813]: I1125 10:38:15.702523 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="070d675e-0557-4a9e-9a9a-1a5019547e2a" containerName="registry-server" Nov 25 10:38:15 crc kubenswrapper[4813]: I1125 10:38:15.702555 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="40aa702b-1c32-45ad-ba16-ae04f8da0675" containerName="registry-server" Nov 25 10:38:15 crc kubenswrapper[4813]: I1125 10:38:15.702570 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="302f6a62-c67c-48ef-97bc-9b53cdf5f67e" containerName="marketplace-operator" Nov 25 10:38:15 crc kubenswrapper[4813]: I1125 10:38:15.702579 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5deac33-30de-491e-94ff-53fe67de0eb8" containerName="registry-server" Nov 25 10:38:15 crc kubenswrapper[4813]: I1125 10:38:15.703360 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lm5fh" Nov 25 10:38:15 crc kubenswrapper[4813]: I1125 10:38:15.705855 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 25 10:38:15 crc kubenswrapper[4813]: I1125 10:38:15.721137 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lm5fh"] Nov 25 10:38:15 crc kubenswrapper[4813]: I1125 10:38:15.757030 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/016d58b6-5b2a-4ab9-aefa-b8d7d135832a-catalog-content\") pod \"redhat-marketplace-lm5fh\" (UID: \"016d58b6-5b2a-4ab9-aefa-b8d7d135832a\") " pod="openshift-marketplace/redhat-marketplace-lm5fh" Nov 25 10:38:15 crc kubenswrapper[4813]: I1125 10:38:15.757325 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82r6x\" (UniqueName: \"kubernetes.io/projected/016d58b6-5b2a-4ab9-aefa-b8d7d135832a-kube-api-access-82r6x\") pod \"redhat-marketplace-lm5fh\" (UID: \"016d58b6-5b2a-4ab9-aefa-b8d7d135832a\") " pod="openshift-marketplace/redhat-marketplace-lm5fh" Nov 25 10:38:15 crc kubenswrapper[4813]: I1125 10:38:15.757457 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/016d58b6-5b2a-4ab9-aefa-b8d7d135832a-utilities\") pod \"redhat-marketplace-lm5fh\" (UID: \"016d58b6-5b2a-4ab9-aefa-b8d7d135832a\") " pod="openshift-marketplace/redhat-marketplace-lm5fh" Nov 25 10:38:15 crc kubenswrapper[4813]: I1125 10:38:15.858663 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/016d58b6-5b2a-4ab9-aefa-b8d7d135832a-catalog-content\") pod \"redhat-marketplace-lm5fh\" (UID: \"016d58b6-5b2a-4ab9-aefa-b8d7d135832a\") " pod="openshift-marketplace/redhat-marketplace-lm5fh" Nov 25 10:38:15 crc kubenswrapper[4813]: I1125 10:38:15.858721 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82r6x\" (UniqueName: \"kubernetes.io/projected/016d58b6-5b2a-4ab9-aefa-b8d7d135832a-kube-api-access-82r6x\") pod \"redhat-marketplace-lm5fh\" (UID: \"016d58b6-5b2a-4ab9-aefa-b8d7d135832a\") " pod="openshift-marketplace/redhat-marketplace-lm5fh" Nov 25 10:38:15 crc kubenswrapper[4813]: I1125 10:38:15.858760 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/016d58b6-5b2a-4ab9-aefa-b8d7d135832a-utilities\") pod \"redhat-marketplace-lm5fh\" (UID: \"016d58b6-5b2a-4ab9-aefa-b8d7d135832a\") " pod="openshift-marketplace/redhat-marketplace-lm5fh" Nov 25 10:38:15 crc kubenswrapper[4813]: I1125 10:38:15.859332 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/016d58b6-5b2a-4ab9-aefa-b8d7d135832a-utilities\") pod \"redhat-marketplace-lm5fh\" (UID: \"016d58b6-5b2a-4ab9-aefa-b8d7d135832a\") " pod="openshift-marketplace/redhat-marketplace-lm5fh" Nov 25 10:38:15 crc kubenswrapper[4813]: I1125 10:38:15.859324 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/016d58b6-5b2a-4ab9-aefa-b8d7d135832a-catalog-content\") pod \"redhat-marketplace-lm5fh\" (UID: 
\"016d58b6-5b2a-4ab9-aefa-b8d7d135832a\") " pod="openshift-marketplace/redhat-marketplace-lm5fh" Nov 25 10:38:15 crc kubenswrapper[4813]: I1125 10:38:15.880834 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82r6x\" (UniqueName: \"kubernetes.io/projected/016d58b6-5b2a-4ab9-aefa-b8d7d135832a-kube-api-access-82r6x\") pod \"redhat-marketplace-lm5fh\" (UID: \"016d58b6-5b2a-4ab9-aefa-b8d7d135832a\") " pod="openshift-marketplace/redhat-marketplace-lm5fh" Nov 25 10:38:15 crc kubenswrapper[4813]: I1125 10:38:15.902670 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qx6fv"] Nov 25 10:38:15 crc kubenswrapper[4813]: I1125 10:38:15.905615 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qx6fv" Nov 25 10:38:15 crc kubenswrapper[4813]: I1125 10:38:15.909512 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 25 10:38:15 crc kubenswrapper[4813]: I1125 10:38:15.911995 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qx6fv"] Nov 25 10:38:15 crc kubenswrapper[4813]: I1125 10:38:15.959830 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20a44cce-b74b-4c4c-ad84-59417afd998c-utilities\") pod \"certified-operators-qx6fv\" (UID: \"20a44cce-b74b-4c4c-ad84-59417afd998c\") " pod="openshift-marketplace/certified-operators-qx6fv" Nov 25 10:38:15 crc kubenswrapper[4813]: I1125 10:38:15.960264 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fs6c4\" (UniqueName: \"kubernetes.io/projected/20a44cce-b74b-4c4c-ad84-59417afd998c-kube-api-access-fs6c4\") pod \"certified-operators-qx6fv\" (UID: \"20a44cce-b74b-4c4c-ad84-59417afd998c\") " pod="openshift-marketplace/certified-operators-qx6fv" Nov 25 10:38:15 crc kubenswrapper[4813]: I1125 10:38:15.960309 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20a44cce-b74b-4c4c-ad84-59417afd998c-catalog-content\") pod \"certified-operators-qx6fv\" (UID: \"20a44cce-b74b-4c4c-ad84-59417afd998c\") " pod="openshift-marketplace/certified-operators-qx6fv" Nov 25 10:38:16 crc kubenswrapper[4813]: I1125 10:38:16.021817 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lm5fh" Nov 25 10:38:16 crc kubenswrapper[4813]: I1125 10:38:16.061707 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20a44cce-b74b-4c4c-ad84-59417afd998c-utilities\") pod \"certified-operators-qx6fv\" (UID: \"20a44cce-b74b-4c4c-ad84-59417afd998c\") " pod="openshift-marketplace/certified-operators-qx6fv" Nov 25 10:38:16 crc kubenswrapper[4813]: I1125 10:38:16.061771 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fs6c4\" (UniqueName: \"kubernetes.io/projected/20a44cce-b74b-4c4c-ad84-59417afd998c-kube-api-access-fs6c4\") pod \"certified-operators-qx6fv\" (UID: \"20a44cce-b74b-4c4c-ad84-59417afd998c\") " pod="openshift-marketplace/certified-operators-qx6fv" Nov 25 10:38:16 crc kubenswrapper[4813]: I1125 10:38:16.061837 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20a44cce-b74b-4c4c-ad84-59417afd998c-catalog-content\") pod \"certified-operators-qx6fv\" (UID: \"20a44cce-b74b-4c4c-ad84-59417afd998c\") " pod="openshift-marketplace/certified-operators-qx6fv" Nov 25 10:38:16 crc kubenswrapper[4813]: I1125 10:38:16.062427 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20a44cce-b74b-4c4c-ad84-59417afd998c-catalog-content\") pod \"certified-operators-qx6fv\" (UID: \"20a44cce-b74b-4c4c-ad84-59417afd998c\") " pod="openshift-marketplace/certified-operators-qx6fv" Nov 25 10:38:16 crc kubenswrapper[4813]: I1125 10:38:16.062613 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20a44cce-b74b-4c4c-ad84-59417afd998c-utilities\") pod \"certified-operators-qx6fv\" (UID: \"20a44cce-b74b-4c4c-ad84-59417afd998c\") " pod="openshift-marketplace/certified-operators-qx6fv" Nov 25 10:38:16 crc kubenswrapper[4813]: I1125 10:38:16.080026 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fs6c4\" (UniqueName: \"kubernetes.io/projected/20a44cce-b74b-4c4c-ad84-59417afd998c-kube-api-access-fs6c4\") pod \"certified-operators-qx6fv\" (UID: \"20a44cce-b74b-4c4c-ad84-59417afd998c\") " pod="openshift-marketplace/certified-operators-qx6fv" Nov 25 10:38:16 crc kubenswrapper[4813]: I1125 10:38:16.225218 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qx6fv" Nov 25 10:38:16 crc kubenswrapper[4813]: I1125 10:38:16.271264 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lm5fh"] Nov 25 10:38:16 crc kubenswrapper[4813]: I1125 10:38:16.413466 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qx6fv"] Nov 25 10:38:16 crc kubenswrapper[4813]: I1125 10:38:16.531861 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qx6fv" event={"ID":"20a44cce-b74b-4c4c-ad84-59417afd998c","Type":"ContainerStarted","Data":"df06e617c96054a41ee431abd2afbb4a01c4c62a600b089f62a095066176c94e"} Nov 25 10:38:16 crc kubenswrapper[4813]: I1125 10:38:16.533256 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lm5fh" event={"ID":"016d58b6-5b2a-4ab9-aefa-b8d7d135832a","Type":"ContainerStarted","Data":"1dcacd5fef1162540cfa59c7df419fb779cdea84a78ed21be6124fb67df6a30f"} Nov 25 10:38:17 crc kubenswrapper[4813]: I1125 10:38:17.544237 4813 generic.go:334] "Generic (PLEG): container finished" podID="20a44cce-b74b-4c4c-ad84-59417afd998c" containerID="0e99bb1ffe2b7d93930fd2223cbd7ad3ce39cee861bbeeb71921f02065e7c36c" exitCode=0 Nov 25 10:38:17 crc kubenswrapper[4813]: I1125 10:38:17.544307 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qx6fv" event={"ID":"20a44cce-b74b-4c4c-ad84-59417afd998c","Type":"ContainerDied","Data":"0e99bb1ffe2b7d93930fd2223cbd7ad3ce39cee861bbeeb71921f02065e7c36c"} Nov 25 10:38:17 crc kubenswrapper[4813]: I1125 10:38:17.547047 4813 generic.go:334] "Generic (PLEG): container finished" podID="016d58b6-5b2a-4ab9-aefa-b8d7d135832a" containerID="fec94b105fdbab241c10e26203a9dd8faf891a8206e8d6bb653d724d31d0af2a" exitCode=0 Nov 25 10:38:17 crc kubenswrapper[4813]: I1125 10:38:17.547243 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lm5fh" event={"ID":"016d58b6-5b2a-4ab9-aefa-b8d7d135832a","Type":"ContainerDied","Data":"fec94b105fdbab241c10e26203a9dd8faf891a8206e8d6bb653d724d31d0af2a"} Nov 25 10:38:18 crc kubenswrapper[4813]: I1125 10:38:18.109759 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-58bv4"] Nov 25 10:38:18 crc kubenswrapper[4813]: I1125 10:38:18.111903 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-58bv4" Nov 25 10:38:18 crc kubenswrapper[4813]: I1125 10:38:18.115830 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-58bv4"] Nov 25 10:38:18 crc kubenswrapper[4813]: I1125 10:38:18.116897 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 25 10:38:18 crc kubenswrapper[4813]: I1125 10:38:18.195776 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b78ef5e-95dc-4291-ba87-3543d2e9d670-utilities\") pod \"community-operators-58bv4\" (UID: \"2b78ef5e-95dc-4291-ba87-3543d2e9d670\") " pod="openshift-marketplace/community-operators-58bv4" Nov 25 10:38:18 crc kubenswrapper[4813]: I1125 10:38:18.195864 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b78ef5e-95dc-4291-ba87-3543d2e9d670-catalog-content\") pod \"community-operators-58bv4\" (UID: \"2b78ef5e-95dc-4291-ba87-3543d2e9d670\") " pod="openshift-marketplace/community-operators-58bv4" Nov 25 10:38:18 crc kubenswrapper[4813]: I1125 10:38:18.195947 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xn2k8\" (UniqueName: \"kubernetes.io/projected/2b78ef5e-95dc-4291-ba87-3543d2e9d670-kube-api-access-xn2k8\") pod \"community-operators-58bv4\" (UID: \"2b78ef5e-95dc-4291-ba87-3543d2e9d670\") " pod="openshift-marketplace/community-operators-58bv4" Nov 25 10:38:18 crc kubenswrapper[4813]: I1125 10:38:18.297579 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xn2k8\" (UniqueName: \"kubernetes.io/projected/2b78ef5e-95dc-4291-ba87-3543d2e9d670-kube-api-access-xn2k8\") pod \"community-operators-58bv4\" (UID: \"2b78ef5e-95dc-4291-ba87-3543d2e9d670\") " pod="openshift-marketplace/community-operators-58bv4" Nov 25 10:38:18 crc kubenswrapper[4813]: I1125 10:38:18.297647 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b78ef5e-95dc-4291-ba87-3543d2e9d670-utilities\") pod \"community-operators-58bv4\" (UID: \"2b78ef5e-95dc-4291-ba87-3543d2e9d670\") " pod="openshift-marketplace/community-operators-58bv4" Nov 25 10:38:18 crc kubenswrapper[4813]: I1125 10:38:18.297729 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b78ef5e-95dc-4291-ba87-3543d2e9d670-catalog-content\") pod \"community-operators-58bv4\" (UID: \"2b78ef5e-95dc-4291-ba87-3543d2e9d670\") " pod="openshift-marketplace/community-operators-58bv4" Nov 25 10:38:18 crc kubenswrapper[4813]: I1125 10:38:18.299035 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b78ef5e-95dc-4291-ba87-3543d2e9d670-catalog-content\") pod \"community-operators-58bv4\" (UID: \"2b78ef5e-95dc-4291-ba87-3543d2e9d670\") " pod="openshift-marketplace/community-operators-58bv4" Nov 25 10:38:18 crc kubenswrapper[4813]: I1125 10:38:18.299517 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b78ef5e-95dc-4291-ba87-3543d2e9d670-utilities\") pod \"community-operators-58bv4\" (UID: 
\"2b78ef5e-95dc-4291-ba87-3543d2e9d670\") " pod="openshift-marketplace/community-operators-58bv4" Nov 25 10:38:18 crc kubenswrapper[4813]: I1125 10:38:18.300721 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-d7w9c"] Nov 25 10:38:18 crc kubenswrapper[4813]: I1125 10:38:18.302112 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d7w9c" Nov 25 10:38:18 crc kubenswrapper[4813]: I1125 10:38:18.310860 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d7w9c"] Nov 25 10:38:18 crc kubenswrapper[4813]: I1125 10:38:18.314032 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 25 10:38:18 crc kubenswrapper[4813]: I1125 10:38:18.322041 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xn2k8\" (UniqueName: \"kubernetes.io/projected/2b78ef5e-95dc-4291-ba87-3543d2e9d670-kube-api-access-xn2k8\") pod \"community-operators-58bv4\" (UID: \"2b78ef5e-95dc-4291-ba87-3543d2e9d670\") " pod="openshift-marketplace/community-operators-58bv4" Nov 25 10:38:18 crc kubenswrapper[4813]: I1125 10:38:18.398682 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/325764bc-72f5-4e98-a157-c04630e4d3ac-catalog-content\") pod \"redhat-operators-d7w9c\" (UID: \"325764bc-72f5-4e98-a157-c04630e4d3ac\") " pod="openshift-marketplace/redhat-operators-d7w9c" Nov 25 10:38:18 crc kubenswrapper[4813]: I1125 10:38:18.398755 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8q4jl\" (UniqueName: \"kubernetes.io/projected/325764bc-72f5-4e98-a157-c04630e4d3ac-kube-api-access-8q4jl\") pod \"redhat-operators-d7w9c\" (UID: \"325764bc-72f5-4e98-a157-c04630e4d3ac\") " pod="openshift-marketplace/redhat-operators-d7w9c" Nov 25 10:38:18 crc kubenswrapper[4813]: I1125 10:38:18.398915 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/325764bc-72f5-4e98-a157-c04630e4d3ac-utilities\") pod \"redhat-operators-d7w9c\" (UID: \"325764bc-72f5-4e98-a157-c04630e4d3ac\") " pod="openshift-marketplace/redhat-operators-d7w9c" Nov 25 10:38:18 crc kubenswrapper[4813]: I1125 10:38:18.434954 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-58bv4" Nov 25 10:38:18 crc kubenswrapper[4813]: I1125 10:38:18.500059 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/325764bc-72f5-4e98-a157-c04630e4d3ac-catalog-content\") pod \"redhat-operators-d7w9c\" (UID: \"325764bc-72f5-4e98-a157-c04630e4d3ac\") " pod="openshift-marketplace/redhat-operators-d7w9c" Nov 25 10:38:18 crc kubenswrapper[4813]: I1125 10:38:18.500119 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8q4jl\" (UniqueName: \"kubernetes.io/projected/325764bc-72f5-4e98-a157-c04630e4d3ac-kube-api-access-8q4jl\") pod \"redhat-operators-d7w9c\" (UID: \"325764bc-72f5-4e98-a157-c04630e4d3ac\") " pod="openshift-marketplace/redhat-operators-d7w9c" Nov 25 10:38:18 crc kubenswrapper[4813]: I1125 10:38:18.500154 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/325764bc-72f5-4e98-a157-c04630e4d3ac-utilities\") pod \"redhat-operators-d7w9c\" (UID: \"325764bc-72f5-4e98-a157-c04630e4d3ac\") " pod="openshift-marketplace/redhat-operators-d7w9c" Nov 25 10:38:18 crc kubenswrapper[4813]: I1125 10:38:18.500533 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/325764bc-72f5-4e98-a157-c04630e4d3ac-catalog-content\") pod \"redhat-operators-d7w9c\" (UID: \"325764bc-72f5-4e98-a157-c04630e4d3ac\") " pod="openshift-marketplace/redhat-operators-d7w9c" Nov 25 10:38:18 crc kubenswrapper[4813]: I1125 10:38:18.500565 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/325764bc-72f5-4e98-a157-c04630e4d3ac-utilities\") pod \"redhat-operators-d7w9c\" (UID: \"325764bc-72f5-4e98-a157-c04630e4d3ac\") " pod="openshift-marketplace/redhat-operators-d7w9c" Nov 25 10:38:18 crc kubenswrapper[4813]: I1125 10:38:18.542143 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8q4jl\" (UniqueName: \"kubernetes.io/projected/325764bc-72f5-4e98-a157-c04630e4d3ac-kube-api-access-8q4jl\") pod \"redhat-operators-d7w9c\" (UID: \"325764bc-72f5-4e98-a157-c04630e4d3ac\") " pod="openshift-marketplace/redhat-operators-d7w9c" Nov 25 10:38:18 crc kubenswrapper[4813]: I1125 10:38:18.653087 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-d7w9c" Nov 25 10:38:18 crc kubenswrapper[4813]: I1125 10:38:18.835347 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d7w9c"] Nov 25 10:38:18 crc kubenswrapper[4813]: W1125 10:38:18.857464 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod325764bc_72f5_4e98_a157_c04630e4d3ac.slice/crio-10b34647e7f957920cfa15c94a7a3ab54660d7c48f570e1fc7cf0b5f26f0b5e9 WatchSource:0}: Error finding container 10b34647e7f957920cfa15c94a7a3ab54660d7c48f570e1fc7cf0b5f26f0b5e9: Status 404 returned error can't find the container with id 10b34647e7f957920cfa15c94a7a3ab54660d7c48f570e1fc7cf0b5f26f0b5e9 Nov 25 10:38:18 crc kubenswrapper[4813]: I1125 10:38:18.873514 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-58bv4"] Nov 25 10:38:18 crc kubenswrapper[4813]: W1125 10:38:18.878518 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b78ef5e_95dc_4291_ba87_3543d2e9d670.slice/crio-da6e6fe3338c7a63d78cdcc69b859fc3ea3f708764182ae3d6032799e4994197 WatchSource:0}: Error finding container da6e6fe3338c7a63d78cdcc69b859fc3ea3f708764182ae3d6032799e4994197: Status 404 returned error can't find the container with id da6e6fe3338c7a63d78cdcc69b859fc3ea3f708764182ae3d6032799e4994197 Nov 25 10:38:19 crc kubenswrapper[4813]: I1125 10:38:19.562621 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d7w9c" event={"ID":"325764bc-72f5-4e98-a157-c04630e4d3ac","Type":"ContainerStarted","Data":"10b34647e7f957920cfa15c94a7a3ab54660d7c48f570e1fc7cf0b5f26f0b5e9"} Nov 25 10:38:19 crc kubenswrapper[4813]: I1125 10:38:19.563721 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-58bv4" event={"ID":"2b78ef5e-95dc-4291-ba87-3543d2e9d670","Type":"ContainerStarted","Data":"da6e6fe3338c7a63d78cdcc69b859fc3ea3f708764182ae3d6032799e4994197"} Nov 25 10:38:20 crc kubenswrapper[4813]: I1125 10:38:20.572193 4813 generic.go:334] "Generic (PLEG): container finished" podID="325764bc-72f5-4e98-a157-c04630e4d3ac" containerID="a7163a32782f153c7566f8642077b1d931aaa86a5393b2405319e09c84d481ab" exitCode=0 Nov 25 10:38:20 crc kubenswrapper[4813]: I1125 10:38:20.572399 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d7w9c" event={"ID":"325764bc-72f5-4e98-a157-c04630e4d3ac","Type":"ContainerDied","Data":"a7163a32782f153c7566f8642077b1d931aaa86a5393b2405319e09c84d481ab"} Nov 25 10:38:21 crc kubenswrapper[4813]: I1125 10:38:21.579615 4813 generic.go:334] "Generic (PLEG): container finished" podID="016d58b6-5b2a-4ab9-aefa-b8d7d135832a" containerID="21611080931f5188d64e262d431ccbecd218d4aee023ad3dbad6e0681067f532" exitCode=0 Nov 25 10:38:21 crc kubenswrapper[4813]: I1125 10:38:21.579831 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lm5fh" event={"ID":"016d58b6-5b2a-4ab9-aefa-b8d7d135832a","Type":"ContainerDied","Data":"21611080931f5188d64e262d431ccbecd218d4aee023ad3dbad6e0681067f532"} Nov 25 10:38:21 crc kubenswrapper[4813]: I1125 10:38:21.582947 4813 generic.go:334] "Generic (PLEG): container finished" podID="2b78ef5e-95dc-4291-ba87-3543d2e9d670" containerID="075a39f1b190f2071dcbaef53af7641135b8da92ee8a8223e7bc09640421249d" exitCode=0 Nov 25 
10:38:21 crc kubenswrapper[4813]: I1125 10:38:21.583012 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-58bv4" event={"ID":"2b78ef5e-95dc-4291-ba87-3543d2e9d670","Type":"ContainerDied","Data":"075a39f1b190f2071dcbaef53af7641135b8da92ee8a8223e7bc09640421249d"} Nov 25 10:38:21 crc kubenswrapper[4813]: I1125 10:38:21.585124 4813 generic.go:334] "Generic (PLEG): container finished" podID="20a44cce-b74b-4c4c-ad84-59417afd998c" containerID="f69723baff2faddb3a823a261b0ff73b2423ab75c383093e7cbadbcc373d9a7d" exitCode=0 Nov 25 10:38:21 crc kubenswrapper[4813]: I1125 10:38:21.585169 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qx6fv" event={"ID":"20a44cce-b74b-4c4c-ad84-59417afd998c","Type":"ContainerDied","Data":"f69723baff2faddb3a823a261b0ff73b2423ab75c383093e7cbadbcc373d9a7d"} Nov 25 10:38:21 crc kubenswrapper[4813]: I1125 10:38:21.967167 4813 patch_prober.go:28] interesting pod/machine-config-daemon-knhz8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 10:38:21 crc kubenswrapper[4813]: I1125 10:38:21.967514 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" podUID="8ece7e9c-d49a-4348-98ec-bd6ab589f750" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 10:38:23 crc kubenswrapper[4813]: I1125 10:38:23.598761 4813 generic.go:334] "Generic (PLEG): container finished" podID="325764bc-72f5-4e98-a157-c04630e4d3ac" containerID="7c09745cd1ba48da66d88e6449a92319446fabdc7f24c28146513341261153e9" exitCode=0 Nov 25 10:38:23 crc kubenswrapper[4813]: I1125 10:38:23.598846 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d7w9c" event={"ID":"325764bc-72f5-4e98-a157-c04630e4d3ac","Type":"ContainerDied","Data":"7c09745cd1ba48da66d88e6449a92319446fabdc7f24c28146513341261153e9"} Nov 25 10:38:23 crc kubenswrapper[4813]: I1125 10:38:23.604249 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lm5fh" event={"ID":"016d58b6-5b2a-4ab9-aefa-b8d7d135832a","Type":"ContainerStarted","Data":"d5e2e8b4535e9daa3a02634eed4ce7338b09b7a148fff3c8dbe5ef1c7c774326"} Nov 25 10:38:23 crc kubenswrapper[4813]: I1125 10:38:23.607604 4813 generic.go:334] "Generic (PLEG): container finished" podID="2b78ef5e-95dc-4291-ba87-3543d2e9d670" containerID="247a48c6da9be99e7ae15039ae02a1ae4c6f922a6752ea94d7541fc159ee7bf2" exitCode=0 Nov 25 10:38:23 crc kubenswrapper[4813]: I1125 10:38:23.607720 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-58bv4" event={"ID":"2b78ef5e-95dc-4291-ba87-3543d2e9d670","Type":"ContainerDied","Data":"247a48c6da9be99e7ae15039ae02a1ae4c6f922a6752ea94d7541fc159ee7bf2"} Nov 25 10:38:23 crc kubenswrapper[4813]: I1125 10:38:23.610279 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qx6fv" event={"ID":"20a44cce-b74b-4c4c-ad84-59417afd998c","Type":"ContainerStarted","Data":"a210df7443a2af45b4fd713706e607cc1ff515d3a41d3efe75c3773d43fba539"} Nov 25 10:38:23 crc kubenswrapper[4813]: I1125 10:38:23.638460 4813 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-marketplace/redhat-marketplace-lm5fh" podStartSLOduration=3.807729958 podStartE2EDuration="8.638443041s" podCreationTimestamp="2025-11-25 10:38:15 +0000 UTC" firstStartedPulling="2025-11-25 10:38:17.552014169 +0000 UTC m=+394.681724065" lastFinishedPulling="2025-11-25 10:38:22.382727262 +0000 UTC m=+399.512437148" observedRunningTime="2025-11-25 10:38:23.636421255 +0000 UTC m=+400.766131161" watchObservedRunningTime="2025-11-25 10:38:23.638443041 +0000 UTC m=+400.768152927" Nov 25 10:38:23 crc kubenswrapper[4813]: I1125 10:38:23.672276 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qx6fv" podStartSLOduration=3.760025976 podStartE2EDuration="8.672251213s" podCreationTimestamp="2025-11-25 10:38:15 +0000 UTC" firstStartedPulling="2025-11-25 10:38:17.553088114 +0000 UTC m=+394.682798000" lastFinishedPulling="2025-11-25 10:38:22.465313351 +0000 UTC m=+399.595023237" observedRunningTime="2025-11-25 10:38:23.671789822 +0000 UTC m=+400.801499718" watchObservedRunningTime="2025-11-25 10:38:23.672251213 +0000 UTC m=+400.801961119" Nov 25 10:38:24 crc kubenswrapper[4813]: I1125 10:38:24.616690 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-58bv4" event={"ID":"2b78ef5e-95dc-4291-ba87-3543d2e9d670","Type":"ContainerStarted","Data":"76221f506ef7c0ffcf0574532b6ae2f6728310930db9f7a1bf1d8803bb7d4c85"} Nov 25 10:38:24 crc kubenswrapper[4813]: I1125 10:38:24.618515 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d7w9c" event={"ID":"325764bc-72f5-4e98-a157-c04630e4d3ac","Type":"ContainerStarted","Data":"5b438e8d1e2f5e6391b2dfd9e8baa04adb017ef6a6077f1cde35f3596fabcef9"} Nov 25 10:38:24 crc kubenswrapper[4813]: I1125 10:38:24.635600 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-58bv4" podStartSLOduration=4.101202935 podStartE2EDuration="6.635580883s" podCreationTimestamp="2025-11-25 10:38:18 +0000 UTC" firstStartedPulling="2025-11-25 10:38:21.584700835 +0000 UTC m=+398.714410721" lastFinishedPulling="2025-11-25 10:38:24.119078783 +0000 UTC m=+401.248788669" observedRunningTime="2025-11-25 10:38:24.632592464 +0000 UTC m=+401.762302370" watchObservedRunningTime="2025-11-25 10:38:24.635580883 +0000 UTC m=+401.765290779" Nov 25 10:38:24 crc kubenswrapper[4813]: I1125 10:38:24.650315 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-d7w9c" podStartSLOduration=3.326140897 podStartE2EDuration="6.650299673s" podCreationTimestamp="2025-11-25 10:38:18 +0000 UTC" firstStartedPulling="2025-11-25 10:38:20.746880586 +0000 UTC m=+397.876590472" lastFinishedPulling="2025-11-25 10:38:24.071039352 +0000 UTC m=+401.200749248" observedRunningTime="2025-11-25 10:38:24.648455721 +0000 UTC m=+401.778165617" watchObservedRunningTime="2025-11-25 10:38:24.650299673 +0000 UTC m=+401.780009559" Nov 25 10:38:26 crc kubenswrapper[4813]: I1125 10:38:26.022781 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lm5fh" Nov 25 10:38:26 crc kubenswrapper[4813]: I1125 10:38:26.023171 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lm5fh" Nov 25 10:38:26 crc kubenswrapper[4813]: I1125 10:38:26.067449 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/redhat-marketplace-lm5fh" Nov 25 10:38:26 crc kubenswrapper[4813]: I1125 10:38:26.225881 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qx6fv" Nov 25 10:38:26 crc kubenswrapper[4813]: I1125 10:38:26.226294 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qx6fv" Nov 25 10:38:26 crc kubenswrapper[4813]: I1125 10:38:26.267782 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qx6fv" Nov 25 10:38:28 crc kubenswrapper[4813]: I1125 10:38:28.435437 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-58bv4" Nov 25 10:38:28 crc kubenswrapper[4813]: I1125 10:38:28.435716 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-58bv4" Nov 25 10:38:28 crc kubenswrapper[4813]: I1125 10:38:28.478089 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-58bv4" Nov 25 10:38:28 crc kubenswrapper[4813]: I1125 10:38:28.653269 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-d7w9c" Nov 25 10:38:28 crc kubenswrapper[4813]: I1125 10:38:28.653449 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-d7w9c" Nov 25 10:38:29 crc kubenswrapper[4813]: I1125 10:38:29.691357 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-d7w9c" podUID="325764bc-72f5-4e98-a157-c04630e4d3ac" containerName="registry-server" probeResult="failure" output=< Nov 25 10:38:29 crc kubenswrapper[4813]: timeout: failed to connect service ":50051" within 1s Nov 25 10:38:29 crc kubenswrapper[4813]: > Nov 25 10:38:36 crc kubenswrapper[4813]: I1125 10:38:36.076275 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lm5fh" Nov 25 10:38:36 crc kubenswrapper[4813]: I1125 10:38:36.269660 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qx6fv" Nov 25 10:38:38 crc kubenswrapper[4813]: I1125 10:38:38.477871 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-58bv4" Nov 25 10:38:38 crc kubenswrapper[4813]: I1125 10:38:38.706919 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-d7w9c" Nov 25 10:38:38 crc kubenswrapper[4813]: I1125 10:38:38.757963 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-d7w9c" Nov 25 10:38:51 crc kubenswrapper[4813]: I1125 10:38:51.967891 4813 patch_prober.go:28] interesting pod/machine-config-daemon-knhz8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 10:38:51 crc kubenswrapper[4813]: I1125 10:38:51.969872 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" podUID="8ece7e9c-d49a-4348-98ec-bd6ab589f750" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 10:39:16 crc kubenswrapper[4813]: I1125 10:39:16.100173 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-lsksm"] Nov 25 10:39:16 crc kubenswrapper[4813]: I1125 10:39:16.102408 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-lsksm" Nov 25 10:39:16 crc kubenswrapper[4813]: I1125 10:39:16.126925 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-lsksm"] Nov 25 10:39:16 crc kubenswrapper[4813]: I1125 10:39:16.215051 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/6b02129b-2e7c-43bb-9215-c545f8b95ee8-installation-pull-secrets\") pod \"image-registry-66df7c8f76-lsksm\" (UID: \"6b02129b-2e7c-43bb-9215-c545f8b95ee8\") " pod="openshift-image-registry/image-registry-66df7c8f76-lsksm" Nov 25 10:39:16 crc kubenswrapper[4813]: I1125 10:39:16.215166 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-lsksm\" (UID: \"6b02129b-2e7c-43bb-9215-c545f8b95ee8\") " pod="openshift-image-registry/image-registry-66df7c8f76-lsksm" Nov 25 10:39:16 crc kubenswrapper[4813]: I1125 10:39:16.215201 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjlvs\" (UniqueName: \"kubernetes.io/projected/6b02129b-2e7c-43bb-9215-c545f8b95ee8-kube-api-access-kjlvs\") pod \"image-registry-66df7c8f76-lsksm\" (UID: \"6b02129b-2e7c-43bb-9215-c545f8b95ee8\") " pod="openshift-image-registry/image-registry-66df7c8f76-lsksm" Nov 25 10:39:16 crc kubenswrapper[4813]: I1125 10:39:16.215235 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6b02129b-2e7c-43bb-9215-c545f8b95ee8-bound-sa-token\") pod \"image-registry-66df7c8f76-lsksm\" (UID: \"6b02129b-2e7c-43bb-9215-c545f8b95ee8\") " pod="openshift-image-registry/image-registry-66df7c8f76-lsksm" Nov 25 10:39:16 crc kubenswrapper[4813]: I1125 10:39:16.215272 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/6b02129b-2e7c-43bb-9215-c545f8b95ee8-ca-trust-extracted\") pod \"image-registry-66df7c8f76-lsksm\" (UID: \"6b02129b-2e7c-43bb-9215-c545f8b95ee8\") " pod="openshift-image-registry/image-registry-66df7c8f76-lsksm" Nov 25 10:39:16 crc kubenswrapper[4813]: I1125 10:39:16.215305 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6b02129b-2e7c-43bb-9215-c545f8b95ee8-trusted-ca\") pod \"image-registry-66df7c8f76-lsksm\" (UID: \"6b02129b-2e7c-43bb-9215-c545f8b95ee8\") " pod="openshift-image-registry/image-registry-66df7c8f76-lsksm" Nov 25 10:39:16 crc kubenswrapper[4813]: I1125 10:39:16.215613 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/6b02129b-2e7c-43bb-9215-c545f8b95ee8-registry-certificates\") pod \"image-registry-66df7c8f76-lsksm\" (UID: \"6b02129b-2e7c-43bb-9215-c545f8b95ee8\") " pod="openshift-image-registry/image-registry-66df7c8f76-lsksm" Nov 25 10:39:16 crc kubenswrapper[4813]: I1125 10:39:16.215824 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/6b02129b-2e7c-43bb-9215-c545f8b95ee8-registry-tls\") pod \"image-registry-66df7c8f76-lsksm\" (UID: \"6b02129b-2e7c-43bb-9215-c545f8b95ee8\") " pod="openshift-image-registry/image-registry-66df7c8f76-lsksm" Nov 25 10:39:16 crc kubenswrapper[4813]: I1125 10:39:16.246540 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-lsksm\" (UID: \"6b02129b-2e7c-43bb-9215-c545f8b95ee8\") " pod="openshift-image-registry/image-registry-66df7c8f76-lsksm" Nov 25 10:39:16 crc kubenswrapper[4813]: I1125 10:39:16.317333 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6b02129b-2e7c-43bb-9215-c545f8b95ee8-trusted-ca\") pod \"image-registry-66df7c8f76-lsksm\" (UID: \"6b02129b-2e7c-43bb-9215-c545f8b95ee8\") " pod="openshift-image-registry/image-registry-66df7c8f76-lsksm" Nov 25 10:39:16 crc kubenswrapper[4813]: I1125 10:39:16.317435 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/6b02129b-2e7c-43bb-9215-c545f8b95ee8-registry-certificates\") pod \"image-registry-66df7c8f76-lsksm\" (UID: \"6b02129b-2e7c-43bb-9215-c545f8b95ee8\") " pod="openshift-image-registry/image-registry-66df7c8f76-lsksm" Nov 25 10:39:16 crc kubenswrapper[4813]: I1125 10:39:16.317477 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/6b02129b-2e7c-43bb-9215-c545f8b95ee8-registry-tls\") pod \"image-registry-66df7c8f76-lsksm\" (UID: \"6b02129b-2e7c-43bb-9215-c545f8b95ee8\") " pod="openshift-image-registry/image-registry-66df7c8f76-lsksm" Nov 25 10:39:16 crc kubenswrapper[4813]: I1125 10:39:16.317532 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/6b02129b-2e7c-43bb-9215-c545f8b95ee8-installation-pull-secrets\") pod \"image-registry-66df7c8f76-lsksm\" (UID: \"6b02129b-2e7c-43bb-9215-c545f8b95ee8\") " pod="openshift-image-registry/image-registry-66df7c8f76-lsksm" Nov 25 10:39:16 crc kubenswrapper[4813]: I1125 10:39:16.317566 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjlvs\" (UniqueName: \"kubernetes.io/projected/6b02129b-2e7c-43bb-9215-c545f8b95ee8-kube-api-access-kjlvs\") pod \"image-registry-66df7c8f76-lsksm\" (UID: \"6b02129b-2e7c-43bb-9215-c545f8b95ee8\") " pod="openshift-image-registry/image-registry-66df7c8f76-lsksm" Nov 25 10:39:16 crc kubenswrapper[4813]: I1125 10:39:16.317778 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6b02129b-2e7c-43bb-9215-c545f8b95ee8-bound-sa-token\") pod \"image-registry-66df7c8f76-lsksm\" (UID: \"6b02129b-2e7c-43bb-9215-c545f8b95ee8\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-lsksm" Nov 25 10:39:16 crc kubenswrapper[4813]: I1125 10:39:16.317933 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/6b02129b-2e7c-43bb-9215-c545f8b95ee8-ca-trust-extracted\") pod \"image-registry-66df7c8f76-lsksm\" (UID: \"6b02129b-2e7c-43bb-9215-c545f8b95ee8\") " pod="openshift-image-registry/image-registry-66df7c8f76-lsksm" Nov 25 10:39:16 crc kubenswrapper[4813]: I1125 10:39:16.318400 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/6b02129b-2e7c-43bb-9215-c545f8b95ee8-ca-trust-extracted\") pod \"image-registry-66df7c8f76-lsksm\" (UID: \"6b02129b-2e7c-43bb-9215-c545f8b95ee8\") " pod="openshift-image-registry/image-registry-66df7c8f76-lsksm" Nov 25 10:39:16 crc kubenswrapper[4813]: I1125 10:39:16.319397 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6b02129b-2e7c-43bb-9215-c545f8b95ee8-trusted-ca\") pod \"image-registry-66df7c8f76-lsksm\" (UID: \"6b02129b-2e7c-43bb-9215-c545f8b95ee8\") " pod="openshift-image-registry/image-registry-66df7c8f76-lsksm" Nov 25 10:39:16 crc kubenswrapper[4813]: I1125 10:39:16.319766 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/6b02129b-2e7c-43bb-9215-c545f8b95ee8-registry-certificates\") pod \"image-registry-66df7c8f76-lsksm\" (UID: \"6b02129b-2e7c-43bb-9215-c545f8b95ee8\") " pod="openshift-image-registry/image-registry-66df7c8f76-lsksm" Nov 25 10:39:16 crc kubenswrapper[4813]: I1125 10:39:16.325457 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/6b02129b-2e7c-43bb-9215-c545f8b95ee8-installation-pull-secrets\") pod \"image-registry-66df7c8f76-lsksm\" (UID: \"6b02129b-2e7c-43bb-9215-c545f8b95ee8\") " pod="openshift-image-registry/image-registry-66df7c8f76-lsksm" Nov 25 10:39:16 crc kubenswrapper[4813]: I1125 10:39:16.325535 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/6b02129b-2e7c-43bb-9215-c545f8b95ee8-registry-tls\") pod \"image-registry-66df7c8f76-lsksm\" (UID: \"6b02129b-2e7c-43bb-9215-c545f8b95ee8\") " pod="openshift-image-registry/image-registry-66df7c8f76-lsksm" Nov 25 10:39:16 crc kubenswrapper[4813]: I1125 10:39:16.339595 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjlvs\" (UniqueName: \"kubernetes.io/projected/6b02129b-2e7c-43bb-9215-c545f8b95ee8-kube-api-access-kjlvs\") pod \"image-registry-66df7c8f76-lsksm\" (UID: \"6b02129b-2e7c-43bb-9215-c545f8b95ee8\") " pod="openshift-image-registry/image-registry-66df7c8f76-lsksm" Nov 25 10:39:16 crc kubenswrapper[4813]: I1125 10:39:16.340710 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6b02129b-2e7c-43bb-9215-c545f8b95ee8-bound-sa-token\") pod \"image-registry-66df7c8f76-lsksm\" (UID: \"6b02129b-2e7c-43bb-9215-c545f8b95ee8\") " pod="openshift-image-registry/image-registry-66df7c8f76-lsksm" Nov 25 10:39:16 crc kubenswrapper[4813]: I1125 10:39:16.422335 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-lsksm" Nov 25 10:39:16 crc kubenswrapper[4813]: I1125 10:39:16.639672 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-lsksm"] Nov 25 10:39:16 crc kubenswrapper[4813]: I1125 10:39:16.922933 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-lsksm" event={"ID":"6b02129b-2e7c-43bb-9215-c545f8b95ee8","Type":"ContainerStarted","Data":"2a38283cb2944b6db0e331dc388b0b54cf410c7140caa00d2a04f69164aaf84a"} Nov 25 10:39:16 crc kubenswrapper[4813]: I1125 10:39:16.922993 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-lsksm" event={"ID":"6b02129b-2e7c-43bb-9215-c545f8b95ee8","Type":"ContainerStarted","Data":"9fbb49318b8435da2339b497495920d319c1aa121893027449dc39e1f5f01101"} Nov 25 10:39:16 crc kubenswrapper[4813]: I1125 10:39:16.923159 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-lsksm" Nov 25 10:39:16 crc kubenswrapper[4813]: I1125 10:39:16.951980 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-lsksm" podStartSLOduration=0.951954599 podStartE2EDuration="951.954599ms" podCreationTimestamp="2025-11-25 10:39:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 10:39:16.950134427 +0000 UTC m=+454.079844333" watchObservedRunningTime="2025-11-25 10:39:16.951954599 +0000 UTC m=+454.081664485" Nov 25 10:39:21 crc kubenswrapper[4813]: I1125 10:39:21.967761 4813 patch_prober.go:28] interesting pod/machine-config-daemon-knhz8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 10:39:21 crc kubenswrapper[4813]: I1125 10:39:21.968712 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" podUID="8ece7e9c-d49a-4348-98ec-bd6ab589f750" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 10:39:21 crc kubenswrapper[4813]: I1125 10:39:21.968806 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" Nov 25 10:39:21 crc kubenswrapper[4813]: I1125 10:39:21.969815 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fd431f86e1d06ef8a5974d3c06c01f9d692e72510667f2b9fb8a07e82ce4af6d"} pod="openshift-machine-config-operator/machine-config-daemon-knhz8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 10:39:21 crc kubenswrapper[4813]: I1125 10:39:21.970004 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" podUID="8ece7e9c-d49a-4348-98ec-bd6ab589f750" containerName="machine-config-daemon" containerID="cri-o://fd431f86e1d06ef8a5974d3c06c01f9d692e72510667f2b9fb8a07e82ce4af6d" gracePeriod=600 Nov 25 10:39:22 crc kubenswrapper[4813]: I1125 
10:39:22.962462 4813 generic.go:334] "Generic (PLEG): container finished" podID="8ece7e9c-d49a-4348-98ec-bd6ab589f750" containerID="fd431f86e1d06ef8a5974d3c06c01f9d692e72510667f2b9fb8a07e82ce4af6d" exitCode=0 Nov 25 10:39:22 crc kubenswrapper[4813]: I1125 10:39:22.963337 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" event={"ID":"8ece7e9c-d49a-4348-98ec-bd6ab589f750","Type":"ContainerDied","Data":"fd431f86e1d06ef8a5974d3c06c01f9d692e72510667f2b9fb8a07e82ce4af6d"} Nov 25 10:39:22 crc kubenswrapper[4813]: I1125 10:39:22.963391 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" event={"ID":"8ece7e9c-d49a-4348-98ec-bd6ab589f750","Type":"ContainerStarted","Data":"cb4d567d43fddcd717213e7940966b7b25b43b79bdbef12af12d619770788967"} Nov 25 10:39:22 crc kubenswrapper[4813]: I1125 10:39:22.963427 4813 scope.go:117] "RemoveContainer" containerID="c16599a2b18976267f55176085b4b11e3e253e308707081d06d28d64f4dbb627" Nov 25 10:39:36 crc kubenswrapper[4813]: I1125 10:39:36.428110 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-lsksm" Nov 25 10:39:36 crc kubenswrapper[4813]: I1125 10:39:36.484341 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wctdv"] Nov 25 10:40:01 crc kubenswrapper[4813]: I1125 10:40:01.530296 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" podUID="89fdb811-5cae-4ece-a672-207a7af34036" containerName="registry" containerID="cri-o://d3ae5f6421a65a5f520e338c6f3e8f646931a32d7c8126d2c83fd47e387a8f9b" gracePeriod=30 Nov 25 10:40:01 crc kubenswrapper[4813]: I1125 10:40:01.980625 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:40:02 crc kubenswrapper[4813]: I1125 10:40:02.154534 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/89fdb811-5cae-4ece-a672-207a7af34036-bound-sa-token\") pod \"89fdb811-5cae-4ece-a672-207a7af34036\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " Nov 25 10:40:02 crc kubenswrapper[4813]: I1125 10:40:02.154614 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/89fdb811-5cae-4ece-a672-207a7af34036-ca-trust-extracted\") pod \"89fdb811-5cae-4ece-a672-207a7af34036\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " Nov 25 10:40:02 crc kubenswrapper[4813]: I1125 10:40:02.154830 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"89fdb811-5cae-4ece-a672-207a7af34036\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " Nov 25 10:40:02 crc kubenswrapper[4813]: I1125 10:40:02.154902 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/89fdb811-5cae-4ece-a672-207a7af34036-installation-pull-secrets\") pod \"89fdb811-5cae-4ece-a672-207a7af34036\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " Nov 25 10:40:02 crc kubenswrapper[4813]: I1125 10:40:02.154940 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/89fdb811-5cae-4ece-a672-207a7af34036-registry-certificates\") pod \"89fdb811-5cae-4ece-a672-207a7af34036\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " Nov 25 10:40:02 crc kubenswrapper[4813]: I1125 10:40:02.154994 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/89fdb811-5cae-4ece-a672-207a7af34036-registry-tls\") pod \"89fdb811-5cae-4ece-a672-207a7af34036\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " Nov 25 10:40:02 crc kubenswrapper[4813]: I1125 10:40:02.155032 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/89fdb811-5cae-4ece-a672-207a7af34036-trusted-ca\") pod \"89fdb811-5cae-4ece-a672-207a7af34036\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " Nov 25 10:40:02 crc kubenswrapper[4813]: I1125 10:40:02.155088 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6cvt2\" (UniqueName: \"kubernetes.io/projected/89fdb811-5cae-4ece-a672-207a7af34036-kube-api-access-6cvt2\") pod \"89fdb811-5cae-4ece-a672-207a7af34036\" (UID: \"89fdb811-5cae-4ece-a672-207a7af34036\") " Nov 25 10:40:02 crc kubenswrapper[4813]: I1125 10:40:02.157857 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89fdb811-5cae-4ece-a672-207a7af34036-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "89fdb811-5cae-4ece-a672-207a7af34036" (UID: "89fdb811-5cae-4ece-a672-207a7af34036"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:40:02 crc kubenswrapper[4813]: I1125 10:40:02.157906 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89fdb811-5cae-4ece-a672-207a7af34036-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "89fdb811-5cae-4ece-a672-207a7af34036" (UID: "89fdb811-5cae-4ece-a672-207a7af34036"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:40:02 crc kubenswrapper[4813]: I1125 10:40:02.163967 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89fdb811-5cae-4ece-a672-207a7af34036-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "89fdb811-5cae-4ece-a672-207a7af34036" (UID: "89fdb811-5cae-4ece-a672-207a7af34036"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:40:02 crc kubenswrapper[4813]: I1125 10:40:02.164530 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89fdb811-5cae-4ece-a672-207a7af34036-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "89fdb811-5cae-4ece-a672-207a7af34036" (UID: "89fdb811-5cae-4ece-a672-207a7af34036"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:40:02 crc kubenswrapper[4813]: I1125 10:40:02.164870 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89fdb811-5cae-4ece-a672-207a7af34036-kube-api-access-6cvt2" (OuterVolumeSpecName: "kube-api-access-6cvt2") pod "89fdb811-5cae-4ece-a672-207a7af34036" (UID: "89fdb811-5cae-4ece-a672-207a7af34036"). InnerVolumeSpecName "kube-api-access-6cvt2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:40:02 crc kubenswrapper[4813]: I1125 10:40:02.174220 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89fdb811-5cae-4ece-a672-207a7af34036-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "89fdb811-5cae-4ece-a672-207a7af34036" (UID: "89fdb811-5cae-4ece-a672-207a7af34036"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:40:02 crc kubenswrapper[4813]: I1125 10:40:02.179598 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "89fdb811-5cae-4ece-a672-207a7af34036" (UID: "89fdb811-5cae-4ece-a672-207a7af34036"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 25 10:40:02 crc kubenswrapper[4813]: I1125 10:40:02.185523 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89fdb811-5cae-4ece-a672-207a7af34036-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "89fdb811-5cae-4ece-a672-207a7af34036" (UID: "89fdb811-5cae-4ece-a672-207a7af34036"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 10:40:02 crc kubenswrapper[4813]: I1125 10:40:02.251355 4813 generic.go:334] "Generic (PLEG): container finished" podID="89fdb811-5cae-4ece-a672-207a7af34036" containerID="d3ae5f6421a65a5f520e338c6f3e8f646931a32d7c8126d2c83fd47e387a8f9b" exitCode=0 Nov 25 10:40:02 crc kubenswrapper[4813]: I1125 10:40:02.251429 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" event={"ID":"89fdb811-5cae-4ece-a672-207a7af34036","Type":"ContainerDied","Data":"d3ae5f6421a65a5f520e338c6f3e8f646931a32d7c8126d2c83fd47e387a8f9b"} Nov 25 10:40:02 crc kubenswrapper[4813]: I1125 10:40:02.251445 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" Nov 25 10:40:02 crc kubenswrapper[4813]: I1125 10:40:02.251489 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-wctdv" event={"ID":"89fdb811-5cae-4ece-a672-207a7af34036","Type":"ContainerDied","Data":"27d28b71d02c039ac11b0c27af575c525fb1ff40e7e02d67ba462d88578c6295"} Nov 25 10:40:02 crc kubenswrapper[4813]: I1125 10:40:02.251525 4813 scope.go:117] "RemoveContainer" containerID="d3ae5f6421a65a5f520e338c6f3e8f646931a32d7c8126d2c83fd47e387a8f9b" Nov 25 10:40:02 crc kubenswrapper[4813]: I1125 10:40:02.257102 4813 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/89fdb811-5cae-4ece-a672-207a7af34036-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 25 10:40:02 crc kubenswrapper[4813]: I1125 10:40:02.257140 4813 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/89fdb811-5cae-4ece-a672-207a7af34036-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 25 10:40:02 crc kubenswrapper[4813]: I1125 10:40:02.257154 4813 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/89fdb811-5cae-4ece-a672-207a7af34036-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 25 10:40:02 crc kubenswrapper[4813]: I1125 10:40:02.257166 4813 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/89fdb811-5cae-4ece-a672-207a7af34036-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 25 10:40:02 crc kubenswrapper[4813]: I1125 10:40:02.257182 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6cvt2\" (UniqueName: \"kubernetes.io/projected/89fdb811-5cae-4ece-a672-207a7af34036-kube-api-access-6cvt2\") on node \"crc\" DevicePath \"\"" Nov 25 10:40:02 crc kubenswrapper[4813]: I1125 10:40:02.257193 4813 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/89fdb811-5cae-4ece-a672-207a7af34036-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 25 10:40:02 crc kubenswrapper[4813]: I1125 10:40:02.257203 4813 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/89fdb811-5cae-4ece-a672-207a7af34036-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 25 10:40:02 crc kubenswrapper[4813]: I1125 10:40:02.284373 4813 scope.go:117] "RemoveContainer" containerID="d3ae5f6421a65a5f520e338c6f3e8f646931a32d7c8126d2c83fd47e387a8f9b" Nov 25 10:40:02 crc kubenswrapper[4813]: E1125 10:40:02.286988 4813 log.go:32] "ContainerStatus 
from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3ae5f6421a65a5f520e338c6f3e8f646931a32d7c8126d2c83fd47e387a8f9b\": container with ID starting with d3ae5f6421a65a5f520e338c6f3e8f646931a32d7c8126d2c83fd47e387a8f9b not found: ID does not exist" containerID="d3ae5f6421a65a5f520e338c6f3e8f646931a32d7c8126d2c83fd47e387a8f9b" Nov 25 10:40:02 crc kubenswrapper[4813]: I1125 10:40:02.287192 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3ae5f6421a65a5f520e338c6f3e8f646931a32d7c8126d2c83fd47e387a8f9b"} err="failed to get container status \"d3ae5f6421a65a5f520e338c6f3e8f646931a32d7c8126d2c83fd47e387a8f9b\": rpc error: code = NotFound desc = could not find container \"d3ae5f6421a65a5f520e338c6f3e8f646931a32d7c8126d2c83fd47e387a8f9b\": container with ID starting with d3ae5f6421a65a5f520e338c6f3e8f646931a32d7c8126d2c83fd47e387a8f9b not found: ID does not exist" Nov 25 10:40:02 crc kubenswrapper[4813]: I1125 10:40:02.306447 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wctdv"] Nov 25 10:40:02 crc kubenswrapper[4813]: I1125 10:40:02.309822 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wctdv"] Nov 25 10:40:03 crc kubenswrapper[4813]: I1125 10:40:03.631949 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89fdb811-5cae-4ece-a672-207a7af34036" path="/var/lib/kubelet/pods/89fdb811-5cae-4ece-a672-207a7af34036/volumes" Nov 25 10:41:51 crc kubenswrapper[4813]: I1125 10:41:51.967597 4813 patch_prober.go:28] interesting pod/machine-config-daemon-knhz8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 10:41:51 crc kubenswrapper[4813]: I1125 10:41:51.968401 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" podUID="8ece7e9c-d49a-4348-98ec-bd6ab589f750" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 10:42:22 crc kubenswrapper[4813]: I1125 10:42:22.416706 4813 patch_prober.go:28] interesting pod/machine-config-daemon-knhz8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 10:42:22 crc kubenswrapper[4813]: I1125 10:42:22.417494 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" podUID="8ece7e9c-d49a-4348-98ec-bd6ab589f750" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 10:42:51 crc kubenswrapper[4813]: I1125 10:42:51.967545 4813 patch_prober.go:28] interesting pod/machine-config-daemon-knhz8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 10:42:51 crc kubenswrapper[4813]: I1125 10:42:51.968167 4813 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-knhz8" podUID="8ece7e9c-d49a-4348-98ec-bd6ab589f750" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 10:42:51 crc kubenswrapper[4813]: I1125 10:42:51.968221 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" Nov 25 10:42:51 crc kubenswrapper[4813]: I1125 10:42:51.969033 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cb4d567d43fddcd717213e7940966b7b25b43b79bdbef12af12d619770788967"} pod="openshift-machine-config-operator/machine-config-daemon-knhz8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 10:42:51 crc kubenswrapper[4813]: I1125 10:42:51.969093 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" podUID="8ece7e9c-d49a-4348-98ec-bd6ab589f750" containerName="machine-config-daemon" containerID="cri-o://cb4d567d43fddcd717213e7940966b7b25b43b79bdbef12af12d619770788967" gracePeriod=600 Nov 25 10:42:52 crc kubenswrapper[4813]: I1125 10:42:52.621748 4813 generic.go:334] "Generic (PLEG): container finished" podID="8ece7e9c-d49a-4348-98ec-bd6ab589f750" containerID="cb4d567d43fddcd717213e7940966b7b25b43b79bdbef12af12d619770788967" exitCode=0 Nov 25 10:42:52 crc kubenswrapper[4813]: I1125 10:42:52.621811 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" event={"ID":"8ece7e9c-d49a-4348-98ec-bd6ab589f750","Type":"ContainerDied","Data":"cb4d567d43fddcd717213e7940966b7b25b43b79bdbef12af12d619770788967"} Nov 25 10:42:52 crc kubenswrapper[4813]: I1125 10:42:52.622351 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" event={"ID":"8ece7e9c-d49a-4348-98ec-bd6ab589f750","Type":"ContainerStarted","Data":"94199ba3a0acbc10bf1b1d8a9e55614a98ff3a435215d0c63b967639b76f1985"} Nov 25 10:42:52 crc kubenswrapper[4813]: I1125 10:42:52.622434 4813 scope.go:117] "RemoveContainer" containerID="fd431f86e1d06ef8a5974d3c06c01f9d692e72510667f2b9fb8a07e82ce4af6d" Nov 25 10:43:45 crc kubenswrapper[4813]: I1125 10:43:45.305822 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-9bjpb"] Nov 25 10:43:45 crc kubenswrapper[4813]: E1125 10:43:45.306728 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89fdb811-5cae-4ece-a672-207a7af34036" containerName="registry" Nov 25 10:43:45 crc kubenswrapper[4813]: I1125 10:43:45.306741 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="89fdb811-5cae-4ece-a672-207a7af34036" containerName="registry" Nov 25 10:43:45 crc kubenswrapper[4813]: I1125 10:43:45.306849 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="89fdb811-5cae-4ece-a672-207a7af34036" containerName="registry" Nov 25 10:43:45 crc kubenswrapper[4813]: I1125 10:43:45.307309 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-9bjpb" Nov 25 10:43:45 crc kubenswrapper[4813]: I1125 10:43:45.309961 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Nov 25 10:43:45 crc kubenswrapper[4813]: I1125 10:43:45.310241 4813 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-r77fj" Nov 25 10:43:45 crc kubenswrapper[4813]: I1125 10:43:45.312085 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Nov 25 10:43:45 crc kubenswrapper[4813]: I1125 10:43:45.328098 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-ds4rg"] Nov 25 10:43:45 crc kubenswrapper[4813]: I1125 10:43:45.329233 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-ds4rg" Nov 25 10:43:45 crc kubenswrapper[4813]: I1125 10:43:45.331279 4813 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-jb89b" Nov 25 10:43:45 crc kubenswrapper[4813]: I1125 10:43:45.331993 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-9bjpb"] Nov 25 10:43:45 crc kubenswrapper[4813]: I1125 10:43:45.335500 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nd5dn\" (UniqueName: \"kubernetes.io/projected/396645a8-bd9a-429a-8d95-33dcec24c4ba-kube-api-access-nd5dn\") pod \"cert-manager-cainjector-7f985d654d-9bjpb\" (UID: \"396645a8-bd9a-429a-8d95-33dcec24c4ba\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-9bjpb" Nov 25 10:43:45 crc kubenswrapper[4813]: I1125 10:43:45.336202 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-74f7x"] Nov 25 10:43:45 crc kubenswrapper[4813]: I1125 10:43:45.336956 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-74f7x" Nov 25 10:43:45 crc kubenswrapper[4813]: I1125 10:43:45.339440 4813 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-dw4ph" Nov 25 10:43:45 crc kubenswrapper[4813]: I1125 10:43:45.343987 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-ds4rg"] Nov 25 10:43:45 crc kubenswrapper[4813]: I1125 10:43:45.356287 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-74f7x"] Nov 25 10:43:45 crc kubenswrapper[4813]: I1125 10:43:45.437577 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nd5dn\" (UniqueName: \"kubernetes.io/projected/396645a8-bd9a-429a-8d95-33dcec24c4ba-kube-api-access-nd5dn\") pod \"cert-manager-cainjector-7f985d654d-9bjpb\" (UID: \"396645a8-bd9a-429a-8d95-33dcec24c4ba\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-9bjpb" Nov 25 10:43:45 crc kubenswrapper[4813]: I1125 10:43:45.437889 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vslf\" (UniqueName: \"kubernetes.io/projected/d25f3a31-9925-4bbb-959f-be2a544fca3a-kube-api-access-9vslf\") pod \"cert-manager-webhook-5655c58dd6-74f7x\" (UID: \"d25f3a31-9925-4bbb-959f-be2a544fca3a\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-74f7x" Nov 25 10:43:45 crc kubenswrapper[4813]: I1125 10:43:45.437930 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xv49w\" (UniqueName: \"kubernetes.io/projected/ee2b9b30-2c9f-4c88-b31b-a20957e03939-kube-api-access-xv49w\") pod \"cert-manager-5b446d88c5-ds4rg\" (UID: \"ee2b9b30-2c9f-4c88-b31b-a20957e03939\") " pod="cert-manager/cert-manager-5b446d88c5-ds4rg" Nov 25 10:43:45 crc kubenswrapper[4813]: I1125 10:43:45.458351 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nd5dn\" (UniqueName: \"kubernetes.io/projected/396645a8-bd9a-429a-8d95-33dcec24c4ba-kube-api-access-nd5dn\") pod \"cert-manager-cainjector-7f985d654d-9bjpb\" (UID: \"396645a8-bd9a-429a-8d95-33dcec24c4ba\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-9bjpb" Nov 25 10:43:45 crc kubenswrapper[4813]: I1125 10:43:45.539421 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vslf\" (UniqueName: \"kubernetes.io/projected/d25f3a31-9925-4bbb-959f-be2a544fca3a-kube-api-access-9vslf\") pod \"cert-manager-webhook-5655c58dd6-74f7x\" (UID: \"d25f3a31-9925-4bbb-959f-be2a544fca3a\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-74f7x" Nov 25 10:43:45 crc kubenswrapper[4813]: I1125 10:43:45.539480 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xv49w\" (UniqueName: \"kubernetes.io/projected/ee2b9b30-2c9f-4c88-b31b-a20957e03939-kube-api-access-xv49w\") pod \"cert-manager-5b446d88c5-ds4rg\" (UID: \"ee2b9b30-2c9f-4c88-b31b-a20957e03939\") " pod="cert-manager/cert-manager-5b446d88c5-ds4rg" Nov 25 10:43:45 crc kubenswrapper[4813]: I1125 10:43:45.556904 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xv49w\" (UniqueName: \"kubernetes.io/projected/ee2b9b30-2c9f-4c88-b31b-a20957e03939-kube-api-access-xv49w\") pod \"cert-manager-5b446d88c5-ds4rg\" (UID: \"ee2b9b30-2c9f-4c88-b31b-a20957e03939\") " 
pod="cert-manager/cert-manager-5b446d88c5-ds4rg" Nov 25 10:43:45 crc kubenswrapper[4813]: I1125 10:43:45.557733 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vslf\" (UniqueName: \"kubernetes.io/projected/d25f3a31-9925-4bbb-959f-be2a544fca3a-kube-api-access-9vslf\") pod \"cert-manager-webhook-5655c58dd6-74f7x\" (UID: \"d25f3a31-9925-4bbb-959f-be2a544fca3a\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-74f7x" Nov 25 10:43:45 crc kubenswrapper[4813]: I1125 10:43:45.625090 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-9bjpb" Nov 25 10:43:45 crc kubenswrapper[4813]: I1125 10:43:45.650120 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-ds4rg" Nov 25 10:43:45 crc kubenswrapper[4813]: I1125 10:43:45.656495 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-74f7x" Nov 25 10:43:46 crc kubenswrapper[4813]: I1125 10:43:46.085403 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-9bjpb"] Nov 25 10:43:46 crc kubenswrapper[4813]: I1125 10:43:46.095494 4813 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 10:43:46 crc kubenswrapper[4813]: I1125 10:43:46.148317 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-ds4rg"] Nov 25 10:43:46 crc kubenswrapper[4813]: W1125 10:43:46.153083 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee2b9b30_2c9f_4c88_b31b_a20957e03939.slice/crio-fd37673bb8c886e31fd669f1d0bf141f3d462a19e771e90b869854a189ae53f0 WatchSource:0}: Error finding container fd37673bb8c886e31fd669f1d0bf141f3d462a19e771e90b869854a189ae53f0: Status 404 returned error can't find the container with id fd37673bb8c886e31fd669f1d0bf141f3d462a19e771e90b869854a189ae53f0 Nov 25 10:43:46 crc kubenswrapper[4813]: I1125 10:43:46.155908 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-74f7x"] Nov 25 10:43:46 crc kubenswrapper[4813]: W1125 10:43:46.160612 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd25f3a31_9925_4bbb_959f_be2a544fca3a.slice/crio-37d3e195deaf6607b655fd38bf074771402d518b17ede123a3774554eebf98f0 WatchSource:0}: Error finding container 37d3e195deaf6607b655fd38bf074771402d518b17ede123a3774554eebf98f0: Status 404 returned error can't find the container with id 37d3e195deaf6607b655fd38bf074771402d518b17ede123a3774554eebf98f0 Nov 25 10:43:46 crc kubenswrapper[4813]: I1125 10:43:46.952636 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-74f7x" event={"ID":"d25f3a31-9925-4bbb-959f-be2a544fca3a","Type":"ContainerStarted","Data":"37d3e195deaf6607b655fd38bf074771402d518b17ede123a3774554eebf98f0"} Nov 25 10:43:46 crc kubenswrapper[4813]: I1125 10:43:46.954199 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-9bjpb" event={"ID":"396645a8-bd9a-429a-8d95-33dcec24c4ba","Type":"ContainerStarted","Data":"45b6471bdb0d1c76539d72582afd0ca6e90f77b2fdfc4f375fc51452685549ab"} Nov 25 10:43:46 crc kubenswrapper[4813]: I1125 10:43:46.955429 4813 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-ds4rg" event={"ID":"ee2b9b30-2c9f-4c88-b31b-a20957e03939","Type":"ContainerStarted","Data":"fd37673bb8c886e31fd669f1d0bf141f3d462a19e771e90b869854a189ae53f0"} Nov 25 10:43:50 crc kubenswrapper[4813]: I1125 10:43:50.981617 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-ds4rg" event={"ID":"ee2b9b30-2c9f-4c88-b31b-a20957e03939","Type":"ContainerStarted","Data":"16e43b42c5f957dba2601e0a03858cb3669954b1ce432af6dcbef18f6448b299"} Nov 25 10:43:50 crc kubenswrapper[4813]: I1125 10:43:50.983826 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-74f7x" event={"ID":"d25f3a31-9925-4bbb-959f-be2a544fca3a","Type":"ContainerStarted","Data":"af1ef0c4941ce07c1b08aecab219e08425e41b1d6cd189b013ade451747eb8d9"} Nov 25 10:43:50 crc kubenswrapper[4813]: I1125 10:43:50.983952 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-74f7x" Nov 25 10:43:50 crc kubenswrapper[4813]: I1125 10:43:50.986227 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-9bjpb" event={"ID":"396645a8-bd9a-429a-8d95-33dcec24c4ba","Type":"ContainerStarted","Data":"822dffefb9c96fe8bc81964af2660bbeeaa2e42c111e9ef90f07aa0ab79f0822"} Nov 25 10:43:50 crc kubenswrapper[4813]: I1125 10:43:50.997151 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-ds4rg" podStartSLOduration=2.12942421 podStartE2EDuration="5.997126008s" podCreationTimestamp="2025-11-25 10:43:45 +0000 UTC" firstStartedPulling="2025-11-25 10:43:46.156520737 +0000 UTC m=+723.286230623" lastFinishedPulling="2025-11-25 10:43:50.024222535 +0000 UTC m=+727.153932421" observedRunningTime="2025-11-25 10:43:50.995657356 +0000 UTC m=+728.125367252" watchObservedRunningTime="2025-11-25 10:43:50.997126008 +0000 UTC m=+728.126835894" Nov 25 10:43:51 crc kubenswrapper[4813]: I1125 10:43:51.019194 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-74f7x" podStartSLOduration=2.16771497 podStartE2EDuration="6.019159306s" podCreationTimestamp="2025-11-25 10:43:45 +0000 UTC" firstStartedPulling="2025-11-25 10:43:46.163765813 +0000 UTC m=+723.293475699" lastFinishedPulling="2025-11-25 10:43:50.015210149 +0000 UTC m=+727.144920035" observedRunningTime="2025-11-25 10:43:51.012384013 +0000 UTC m=+728.142093899" watchObservedRunningTime="2025-11-25 10:43:51.019159306 +0000 UTC m=+728.148869202" Nov 25 10:43:55 crc kubenswrapper[4813]: I1125 10:43:55.663966 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-9bjpb" podStartSLOduration=6.668553297 podStartE2EDuration="10.663936375s" podCreationTimestamp="2025-11-25 10:43:45 +0000 UTC" firstStartedPulling="2025-11-25 10:43:46.095170078 +0000 UTC m=+723.224879964" lastFinishedPulling="2025-11-25 10:43:50.090553156 +0000 UTC m=+727.220263042" observedRunningTime="2025-11-25 10:43:51.029524232 +0000 UTC m=+728.159234128" watchObservedRunningTime="2025-11-25 10:43:55.663936375 +0000 UTC m=+732.793646271" Nov 25 10:43:55 crc kubenswrapper[4813]: I1125 10:43:55.671473 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8s5k7"] Nov 25 10:43:55 crc kubenswrapper[4813]: I1125 10:43:55.674240 4813 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" podUID="8460ec76-ba89-4f8f-9055-d7274ab52d11" containerName="northd" containerID="cri-o://1581fa41d3a426258f7c464d5e0f2ad431917ccec0616d26bb8b0affa320c90e" gracePeriod=30 Nov 25 10:43:55 crc kubenswrapper[4813]: I1125 10:43:55.674477 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" podUID="8460ec76-ba89-4f8f-9055-d7274ab52d11" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://0ab3178c217051fe9026c77a963c194bed57ec0fb9521678f41c7c16235ca789" gracePeriod=30 Nov 25 10:43:55 crc kubenswrapper[4813]: I1125 10:43:55.674552 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" podUID="8460ec76-ba89-4f8f-9055-d7274ab52d11" containerName="sbdb" containerID="cri-o://32898e756d7697bcb5b6ae6780b7b752be67b44b9ce8c2f2459477c7f0b0a28d" gracePeriod=30 Nov 25 10:43:55 crc kubenswrapper[4813]: I1125 10:43:55.674571 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" podUID="8460ec76-ba89-4f8f-9055-d7274ab52d11" containerName="nbdb" containerID="cri-o://ee35613ff013fdd9f9ba4aa81006a99cd328ab65010b9b337815829bfcc88937" gracePeriod=30 Nov 25 10:43:55 crc kubenswrapper[4813]: I1125 10:43:55.674991 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" podUID="8460ec76-ba89-4f8f-9055-d7274ab52d11" containerName="kube-rbac-proxy-node" containerID="cri-o://d0292e263e2315d5f0352fb15d9e84e89f103c0b8e3371db2a611b001c5a3fe6" gracePeriod=30 Nov 25 10:43:55 crc kubenswrapper[4813]: I1125 10:43:55.675023 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" podUID="8460ec76-ba89-4f8f-9055-d7274ab52d11" containerName="ovn-acl-logging" containerID="cri-o://7c4c4032f6080041e0b54686cb2c9981d2578e7a2bd02bcc1cf008c8fa3bfb6d" gracePeriod=30 Nov 25 10:43:55 crc kubenswrapper[4813]: I1125 10:43:55.675042 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" podUID="8460ec76-ba89-4f8f-9055-d7274ab52d11" containerName="ovn-controller" containerID="cri-o://7324d51c21107fadbd2f170e16f3cc20fc473ca9b7b1bbe0fc5e64378bd6ab7f" gracePeriod=30 Nov 25 10:43:55 crc kubenswrapper[4813]: I1125 10:43:55.679736 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-74f7x" Nov 25 10:43:55 crc kubenswrapper[4813]: I1125 10:43:55.746355 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" podUID="8460ec76-ba89-4f8f-9055-d7274ab52d11" containerName="ovnkube-controller" containerID="cri-o://0e445d1b17b17b79ca73cab7e0b8c0fde1cee7996193a9b5e3155593909b4a3a" gracePeriod=30 Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.017705 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rlpbx_98439068-3c89-4c1b-8bb8-8aa848ef0cd3/kube-multus/2.log" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.018330 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rlpbx_98439068-3c89-4c1b-8bb8-8aa848ef0cd3/kube-multus/1.log" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.018388 4813 
generic.go:334] "Generic (PLEG): container finished" podID="98439068-3c89-4c1b-8bb8-8aa848ef0cd3" containerID="697fb46d168c6582c121e2351076bc5ac6817cf08da2f08b3927d576bbf35525" exitCode=2 Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.018452 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rlpbx" event={"ID":"98439068-3c89-4c1b-8bb8-8aa848ef0cd3","Type":"ContainerDied","Data":"697fb46d168c6582c121e2351076bc5ac6817cf08da2f08b3927d576bbf35525"} Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.018505 4813 scope.go:117] "RemoveContainer" containerID="e45d1cfd847d1fbd71b9790ea8725a76ffc6117b372d227e921dad0143f7b30c" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.019054 4813 scope.go:117] "RemoveContainer" containerID="697fb46d168c6582c121e2351076bc5ac6817cf08da2f08b3927d576bbf35525" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.024514 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8s5k7_8460ec76-ba89-4f8f-9055-d7274ab52d11/ovnkube-controller/3.log" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.031565 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8s5k7_8460ec76-ba89-4f8f-9055-d7274ab52d11/ovn-acl-logging/0.log" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.031603 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8s5k7_8460ec76-ba89-4f8f-9055-d7274ab52d11/ovnkube-controller/3.log" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.034482 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8s5k7_8460ec76-ba89-4f8f-9055-d7274ab52d11/ovn-controller/0.log" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.036173 4813 generic.go:334] "Generic (PLEG): container finished" podID="8460ec76-ba89-4f8f-9055-d7274ab52d11" containerID="0e445d1b17b17b79ca73cab7e0b8c0fde1cee7996193a9b5e3155593909b4a3a" exitCode=0 Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.036211 4813 generic.go:334] "Generic (PLEG): container finished" podID="8460ec76-ba89-4f8f-9055-d7274ab52d11" containerID="32898e756d7697bcb5b6ae6780b7b752be67b44b9ce8c2f2459477c7f0b0a28d" exitCode=0 Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.036218 4813 generic.go:334] "Generic (PLEG): container finished" podID="8460ec76-ba89-4f8f-9055-d7274ab52d11" containerID="ee35613ff013fdd9f9ba4aa81006a99cd328ab65010b9b337815829bfcc88937" exitCode=0 Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.036227 4813 generic.go:334] "Generic (PLEG): container finished" podID="8460ec76-ba89-4f8f-9055-d7274ab52d11" containerID="1581fa41d3a426258f7c464d5e0f2ad431917ccec0616d26bb8b0affa320c90e" exitCode=0 Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.036235 4813 generic.go:334] "Generic (PLEG): container finished" podID="8460ec76-ba89-4f8f-9055-d7274ab52d11" containerID="0ab3178c217051fe9026c77a963c194bed57ec0fb9521678f41c7c16235ca789" exitCode=0 Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.036242 4813 generic.go:334] "Generic (PLEG): container finished" podID="8460ec76-ba89-4f8f-9055-d7274ab52d11" containerID="d0292e263e2315d5f0352fb15d9e84e89f103c0b8e3371db2a611b001c5a3fe6" exitCode=0 Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.036248 4813 generic.go:334] "Generic (PLEG): container finished" podID="8460ec76-ba89-4f8f-9055-d7274ab52d11" 
containerID="7c4c4032f6080041e0b54686cb2c9981d2578e7a2bd02bcc1cf008c8fa3bfb6d" exitCode=143 Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.036257 4813 generic.go:334] "Generic (PLEG): container finished" podID="8460ec76-ba89-4f8f-9055-d7274ab52d11" containerID="7324d51c21107fadbd2f170e16f3cc20fc473ca9b7b1bbe0fc5e64378bd6ab7f" exitCode=143 Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.036283 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" event={"ID":"8460ec76-ba89-4f8f-9055-d7274ab52d11","Type":"ContainerDied","Data":"0e445d1b17b17b79ca73cab7e0b8c0fde1cee7996193a9b5e3155593909b4a3a"} Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.036320 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" event={"ID":"8460ec76-ba89-4f8f-9055-d7274ab52d11","Type":"ContainerDied","Data":"32898e756d7697bcb5b6ae6780b7b752be67b44b9ce8c2f2459477c7f0b0a28d"} Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.036332 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" event={"ID":"8460ec76-ba89-4f8f-9055-d7274ab52d11","Type":"ContainerDied","Data":"ee35613ff013fdd9f9ba4aa81006a99cd328ab65010b9b337815829bfcc88937"} Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.036342 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" event={"ID":"8460ec76-ba89-4f8f-9055-d7274ab52d11","Type":"ContainerDied","Data":"1581fa41d3a426258f7c464d5e0f2ad431917ccec0616d26bb8b0affa320c90e"} Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.036350 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" event={"ID":"8460ec76-ba89-4f8f-9055-d7274ab52d11","Type":"ContainerDied","Data":"0ab3178c217051fe9026c77a963c194bed57ec0fb9521678f41c7c16235ca789"} Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.036359 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" event={"ID":"8460ec76-ba89-4f8f-9055-d7274ab52d11","Type":"ContainerDied","Data":"d0292e263e2315d5f0352fb15d9e84e89f103c0b8e3371db2a611b001c5a3fe6"} Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.036370 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" event={"ID":"8460ec76-ba89-4f8f-9055-d7274ab52d11","Type":"ContainerDied","Data":"7c4c4032f6080041e0b54686cb2c9981d2578e7a2bd02bcc1cf008c8fa3bfb6d"} Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.036379 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" event={"ID":"8460ec76-ba89-4f8f-9055-d7274ab52d11","Type":"ContainerDied","Data":"7324d51c21107fadbd2f170e16f3cc20fc473ca9b7b1bbe0fc5e64378bd6ab7f"} Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.038063 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8s5k7_8460ec76-ba89-4f8f-9055-d7274ab52d11/ovn-acl-logging/0.log" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.038815 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8s5k7_8460ec76-ba89-4f8f-9055-d7274ab52d11/ovn-controller/0.log" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.039996 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.048845 4813 scope.go:117] "RemoveContainer" containerID="c47a786668d4e29437970008a1e91d74d92c964ba10a6eba1f8d405d05a26e7b" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.094841 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-ql2rj"] Nov 25 10:43:56 crc kubenswrapper[4813]: E1125 10:43:56.095914 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8460ec76-ba89-4f8f-9055-d7274ab52d11" containerName="ovn-controller" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.095932 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="8460ec76-ba89-4f8f-9055-d7274ab52d11" containerName="ovn-controller" Nov 25 10:43:56 crc kubenswrapper[4813]: E1125 10:43:56.095942 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8460ec76-ba89-4f8f-9055-d7274ab52d11" containerName="ovn-acl-logging" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.095948 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="8460ec76-ba89-4f8f-9055-d7274ab52d11" containerName="ovn-acl-logging" Nov 25 10:43:56 crc kubenswrapper[4813]: E1125 10:43:56.095957 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8460ec76-ba89-4f8f-9055-d7274ab52d11" containerName="kube-rbac-proxy-ovn-metrics" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.095964 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="8460ec76-ba89-4f8f-9055-d7274ab52d11" containerName="kube-rbac-proxy-ovn-metrics" Nov 25 10:43:56 crc kubenswrapper[4813]: E1125 10:43:56.095974 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8460ec76-ba89-4f8f-9055-d7274ab52d11" containerName="ovnkube-controller" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.095980 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="8460ec76-ba89-4f8f-9055-d7274ab52d11" containerName="ovnkube-controller" Nov 25 10:43:56 crc kubenswrapper[4813]: E1125 10:43:56.095986 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8460ec76-ba89-4f8f-9055-d7274ab52d11" containerName="ovnkube-controller" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.095993 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="8460ec76-ba89-4f8f-9055-d7274ab52d11" containerName="ovnkube-controller" Nov 25 10:43:56 crc kubenswrapper[4813]: E1125 10:43:56.096001 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8460ec76-ba89-4f8f-9055-d7274ab52d11" containerName="northd" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.096006 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="8460ec76-ba89-4f8f-9055-d7274ab52d11" containerName="northd" Nov 25 10:43:56 crc kubenswrapper[4813]: E1125 10:43:56.096014 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8460ec76-ba89-4f8f-9055-d7274ab52d11" containerName="sbdb" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.096019 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="8460ec76-ba89-4f8f-9055-d7274ab52d11" containerName="sbdb" Nov 25 10:43:56 crc kubenswrapper[4813]: E1125 10:43:56.096029 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8460ec76-ba89-4f8f-9055-d7274ab52d11" containerName="ovnkube-controller" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.096035 4813 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="8460ec76-ba89-4f8f-9055-d7274ab52d11" containerName="ovnkube-controller" Nov 25 10:43:56 crc kubenswrapper[4813]: E1125 10:43:56.096041 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8460ec76-ba89-4f8f-9055-d7274ab52d11" containerName="kube-rbac-proxy-node" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.096047 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="8460ec76-ba89-4f8f-9055-d7274ab52d11" containerName="kube-rbac-proxy-node" Nov 25 10:43:56 crc kubenswrapper[4813]: E1125 10:43:56.096057 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8460ec76-ba89-4f8f-9055-d7274ab52d11" containerName="ovnkube-controller" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.096062 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="8460ec76-ba89-4f8f-9055-d7274ab52d11" containerName="ovnkube-controller" Nov 25 10:43:56 crc kubenswrapper[4813]: E1125 10:43:56.096070 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8460ec76-ba89-4f8f-9055-d7274ab52d11" containerName="kubecfg-setup" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.096076 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="8460ec76-ba89-4f8f-9055-d7274ab52d11" containerName="kubecfg-setup" Nov 25 10:43:56 crc kubenswrapper[4813]: E1125 10:43:56.096082 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8460ec76-ba89-4f8f-9055-d7274ab52d11" containerName="nbdb" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.096087 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="8460ec76-ba89-4f8f-9055-d7274ab52d11" containerName="nbdb" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.096179 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="8460ec76-ba89-4f8f-9055-d7274ab52d11" containerName="ovn-controller" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.096189 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="8460ec76-ba89-4f8f-9055-d7274ab52d11" containerName="ovnkube-controller" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.096196 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="8460ec76-ba89-4f8f-9055-d7274ab52d11" containerName="ovnkube-controller" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.096204 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="8460ec76-ba89-4f8f-9055-d7274ab52d11" containerName="northd" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.096212 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="8460ec76-ba89-4f8f-9055-d7274ab52d11" containerName="sbdb" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.096220 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="8460ec76-ba89-4f8f-9055-d7274ab52d11" containerName="ovn-acl-logging" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.096228 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="8460ec76-ba89-4f8f-9055-d7274ab52d11" containerName="kube-rbac-proxy-ovn-metrics" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.096236 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="8460ec76-ba89-4f8f-9055-d7274ab52d11" containerName="nbdb" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.096244 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="8460ec76-ba89-4f8f-9055-d7274ab52d11" containerName="kube-rbac-proxy-node" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.096251 4813 
memory_manager.go:354] "RemoveStaleState removing state" podUID="8460ec76-ba89-4f8f-9055-d7274ab52d11" containerName="ovnkube-controller" Nov 25 10:43:56 crc kubenswrapper[4813]: E1125 10:43:56.096333 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8460ec76-ba89-4f8f-9055-d7274ab52d11" containerName="ovnkube-controller" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.096340 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="8460ec76-ba89-4f8f-9055-d7274ab52d11" containerName="ovnkube-controller" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.096422 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="8460ec76-ba89-4f8f-9055-d7274ab52d11" containerName="ovnkube-controller" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.096588 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="8460ec76-ba89-4f8f-9055-d7274ab52d11" containerName="ovnkube-controller" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.098009 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ql2rj" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.116672 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8460ec76-ba89-4f8f-9055-d7274ab52d11-ovn-node-metrics-cert\") pod \"8460ec76-ba89-4f8f-9055-d7274ab52d11\" (UID: \"8460ec76-ba89-4f8f-9055-d7274ab52d11\") " Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.116782 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8460ec76-ba89-4f8f-9055-d7274ab52d11-log-socket\") pod \"8460ec76-ba89-4f8f-9055-d7274ab52d11\" (UID: \"8460ec76-ba89-4f8f-9055-d7274ab52d11\") " Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.116830 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8460ec76-ba89-4f8f-9055-d7274ab52d11-host-slash\") pod \"8460ec76-ba89-4f8f-9055-d7274ab52d11\" (UID: \"8460ec76-ba89-4f8f-9055-d7274ab52d11\") " Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.116859 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8460ec76-ba89-4f8f-9055-d7274ab52d11-host-run-netns\") pod \"8460ec76-ba89-4f8f-9055-d7274ab52d11\" (UID: \"8460ec76-ba89-4f8f-9055-d7274ab52d11\") " Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.116903 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8460ec76-ba89-4f8f-9055-d7274ab52d11-var-lib-openvswitch\") pod \"8460ec76-ba89-4f8f-9055-d7274ab52d11\" (UID: \"8460ec76-ba89-4f8f-9055-d7274ab52d11\") " Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.116952 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8460ec76-ba89-4f8f-9055-d7274ab52d11-host-cni-bin\") pod \"8460ec76-ba89-4f8f-9055-d7274ab52d11\" (UID: \"8460ec76-ba89-4f8f-9055-d7274ab52d11\") " Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.116958 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8460ec76-ba89-4f8f-9055-d7274ab52d11-log-socket" (OuterVolumeSpecName: "log-socket") pod 
"8460ec76-ba89-4f8f-9055-d7274ab52d11" (UID: "8460ec76-ba89-4f8f-9055-d7274ab52d11"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.117003 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8460ec76-ba89-4f8f-9055-d7274ab52d11-ovnkube-config\") pod \"8460ec76-ba89-4f8f-9055-d7274ab52d11\" (UID: \"8460ec76-ba89-4f8f-9055-d7274ab52d11\") " Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.117026 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8460ec76-ba89-4f8f-9055-d7274ab52d11-systemd-units\") pod \"8460ec76-ba89-4f8f-9055-d7274ab52d11\" (UID: \"8460ec76-ba89-4f8f-9055-d7274ab52d11\") " Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.117032 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8460ec76-ba89-4f8f-9055-d7274ab52d11-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "8460ec76-ba89-4f8f-9055-d7274ab52d11" (UID: "8460ec76-ba89-4f8f-9055-d7274ab52d11"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.117068 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8460ec76-ba89-4f8f-9055-d7274ab52d11-host-slash" (OuterVolumeSpecName: "host-slash") pod "8460ec76-ba89-4f8f-9055-d7274ab52d11" (UID: "8460ec76-ba89-4f8f-9055-d7274ab52d11"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.117099 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8460ec76-ba89-4f8f-9055-d7274ab52d11-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "8460ec76-ba89-4f8f-9055-d7274ab52d11" (UID: "8460ec76-ba89-4f8f-9055-d7274ab52d11"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.117107 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8460ec76-ba89-4f8f-9055-d7274ab52d11-env-overrides\") pod \"8460ec76-ba89-4f8f-9055-d7274ab52d11\" (UID: \"8460ec76-ba89-4f8f-9055-d7274ab52d11\") " Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.117127 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8460ec76-ba89-4f8f-9055-d7274ab52d11-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "8460ec76-ba89-4f8f-9055-d7274ab52d11" (UID: "8460ec76-ba89-4f8f-9055-d7274ab52d11"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.117171 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8460ec76-ba89-4f8f-9055-d7274ab52d11-run-systemd\") pod \"8460ec76-ba89-4f8f-9055-d7274ab52d11\" (UID: \"8460ec76-ba89-4f8f-9055-d7274ab52d11\") " Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.117193 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8460ec76-ba89-4f8f-9055-d7274ab52d11-host-kubelet\") pod \"8460ec76-ba89-4f8f-9055-d7274ab52d11\" (UID: \"8460ec76-ba89-4f8f-9055-d7274ab52d11\") " Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.117271 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8460ec76-ba89-4f8f-9055-d7274ab52d11-run-ovn\") pod \"8460ec76-ba89-4f8f-9055-d7274ab52d11\" (UID: \"8460ec76-ba89-4f8f-9055-d7274ab52d11\") " Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.117327 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8460ec76-ba89-4f8f-9055-d7274ab52d11-run-openvswitch\") pod \"8460ec76-ba89-4f8f-9055-d7274ab52d11\" (UID: \"8460ec76-ba89-4f8f-9055-d7274ab52d11\") " Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.117352 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8460ec76-ba89-4f8f-9055-d7274ab52d11-etc-openvswitch\") pod \"8460ec76-ba89-4f8f-9055-d7274ab52d11\" (UID: \"8460ec76-ba89-4f8f-9055-d7274ab52d11\") " Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.117416 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svkcf\" (UniqueName: \"kubernetes.io/projected/8460ec76-ba89-4f8f-9055-d7274ab52d11-kube-api-access-svkcf\") pod \"8460ec76-ba89-4f8f-9055-d7274ab52d11\" (UID: \"8460ec76-ba89-4f8f-9055-d7274ab52d11\") " Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.117485 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8460ec76-ba89-4f8f-9055-d7274ab52d11-host-var-lib-cni-networks-ovn-kubernetes\") pod \"8460ec76-ba89-4f8f-9055-d7274ab52d11\" (UID: \"8460ec76-ba89-4f8f-9055-d7274ab52d11\") " Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.117517 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8460ec76-ba89-4f8f-9055-d7274ab52d11-ovnkube-script-lib\") pod \"8460ec76-ba89-4f8f-9055-d7274ab52d11\" (UID: \"8460ec76-ba89-4f8f-9055-d7274ab52d11\") " Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.117567 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8460ec76-ba89-4f8f-9055-d7274ab52d11-host-run-ovn-kubernetes\") pod \"8460ec76-ba89-4f8f-9055-d7274ab52d11\" (UID: \"8460ec76-ba89-4f8f-9055-d7274ab52d11\") " Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.117592 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/8460ec76-ba89-4f8f-9055-d7274ab52d11-host-cni-netd\") pod \"8460ec76-ba89-4f8f-9055-d7274ab52d11\" (UID: \"8460ec76-ba89-4f8f-9055-d7274ab52d11\") " Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.117635 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8460ec76-ba89-4f8f-9055-d7274ab52d11-node-log\") pod \"8460ec76-ba89-4f8f-9055-d7274ab52d11\" (UID: \"8460ec76-ba89-4f8f-9055-d7274ab52d11\") " Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.117663 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8460ec76-ba89-4f8f-9055-d7274ab52d11-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "8460ec76-ba89-4f8f-9055-d7274ab52d11" (UID: "8460ec76-ba89-4f8f-9055-d7274ab52d11"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.117846 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8460ec76-ba89-4f8f-9055-d7274ab52d11-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "8460ec76-ba89-4f8f-9055-d7274ab52d11" (UID: "8460ec76-ba89-4f8f-9055-d7274ab52d11"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.117882 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8460ec76-ba89-4f8f-9055-d7274ab52d11-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "8460ec76-ba89-4f8f-9055-d7274ab52d11" (UID: "8460ec76-ba89-4f8f-9055-d7274ab52d11"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.118116 4813 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8460ec76-ba89-4f8f-9055-d7274ab52d11-host-cni-bin\") on node \"crc\" DevicePath \"\"" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.118156 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8460ec76-ba89-4f8f-9055-d7274ab52d11-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "8460ec76-ba89-4f8f-9055-d7274ab52d11" (UID: "8460ec76-ba89-4f8f-9055-d7274ab52d11"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.118167 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8460ec76-ba89-4f8f-9055-d7274ab52d11-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "8460ec76-ba89-4f8f-9055-d7274ab52d11" (UID: "8460ec76-ba89-4f8f-9055-d7274ab52d11"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.118193 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8460ec76-ba89-4f8f-9055-d7274ab52d11-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "8460ec76-ba89-4f8f-9055-d7274ab52d11" (UID: "8460ec76-ba89-4f8f-9055-d7274ab52d11"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.118215 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8460ec76-ba89-4f8f-9055-d7274ab52d11-node-log" (OuterVolumeSpecName: "node-log") pod "8460ec76-ba89-4f8f-9055-d7274ab52d11" (UID: "8460ec76-ba89-4f8f-9055-d7274ab52d11"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.118216 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8460ec76-ba89-4f8f-9055-d7274ab52d11-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "8460ec76-ba89-4f8f-9055-d7274ab52d11" (UID: "8460ec76-ba89-4f8f-9055-d7274ab52d11"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.118253 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8460ec76-ba89-4f8f-9055-d7274ab52d11-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "8460ec76-ba89-4f8f-9055-d7274ab52d11" (UID: "8460ec76-ba89-4f8f-9055-d7274ab52d11"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.118291 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8460ec76-ba89-4f8f-9055-d7274ab52d11-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "8460ec76-ba89-4f8f-9055-d7274ab52d11" (UID: "8460ec76-ba89-4f8f-9055-d7274ab52d11"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.118645 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8460ec76-ba89-4f8f-9055-d7274ab52d11-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "8460ec76-ba89-4f8f-9055-d7274ab52d11" (UID: "8460ec76-ba89-4f8f-9055-d7274ab52d11"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.119403 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8460ec76-ba89-4f8f-9055-d7274ab52d11-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "8460ec76-ba89-4f8f-9055-d7274ab52d11" (UID: "8460ec76-ba89-4f8f-9055-d7274ab52d11"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.119737 4813 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8460ec76-ba89-4f8f-9055-d7274ab52d11-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.119757 4813 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8460ec76-ba89-4f8f-9055-d7274ab52d11-systemd-units\") on node \"crc\" DevicePath \"\"" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.120034 4813 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8460ec76-ba89-4f8f-9055-d7274ab52d11-run-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.120057 4813 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8460ec76-ba89-4f8f-9055-d7274ab52d11-log-socket\") on node \"crc\" DevicePath \"\"" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.120069 4813 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8460ec76-ba89-4f8f-9055-d7274ab52d11-host-slash\") on node \"crc\" DevicePath \"\"" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.120107 4813 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8460ec76-ba89-4f8f-9055-d7274ab52d11-host-run-netns\") on node \"crc\" DevicePath \"\"" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.120120 4813 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8460ec76-ba89-4f8f-9055-d7274ab52d11-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.124742 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8460ec76-ba89-4f8f-9055-d7274ab52d11-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "8460ec76-ba89-4f8f-9055-d7274ab52d11" (UID: "8460ec76-ba89-4f8f-9055-d7274ab52d11"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.125327 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8460ec76-ba89-4f8f-9055-d7274ab52d11-kube-api-access-svkcf" (OuterVolumeSpecName: "kube-api-access-svkcf") pod "8460ec76-ba89-4f8f-9055-d7274ab52d11" (UID: "8460ec76-ba89-4f8f-9055-d7274ab52d11"). InnerVolumeSpecName "kube-api-access-svkcf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.133108 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8460ec76-ba89-4f8f-9055-d7274ab52d11-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "8460ec76-ba89-4f8f-9055-d7274ab52d11" (UID: "8460ec76-ba89-4f8f-9055-d7274ab52d11"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.221416 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/af77c760-35e1-44c7-9118-3eb2ec12d6af-run-ovn\") pod \"ovnkube-node-ql2rj\" (UID: \"af77c760-35e1-44c7-9118-3eb2ec12d6af\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql2rj" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.221477 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af77c760-35e1-44c7-9118-3eb2ec12d6af-env-overrides\") pod \"ovnkube-node-ql2rj\" (UID: \"af77c760-35e1-44c7-9118-3eb2ec12d6af\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql2rj" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.221502 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/af77c760-35e1-44c7-9118-3eb2ec12d6af-node-log\") pod \"ovnkube-node-ql2rj\" (UID: \"af77c760-35e1-44c7-9118-3eb2ec12d6af\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql2rj" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.221532 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/af77c760-35e1-44c7-9118-3eb2ec12d6af-host-run-netns\") pod \"ovnkube-node-ql2rj\" (UID: \"af77c760-35e1-44c7-9118-3eb2ec12d6af\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql2rj" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.221762 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/af77c760-35e1-44c7-9118-3eb2ec12d6af-host-cni-bin\") pod \"ovnkube-node-ql2rj\" (UID: \"af77c760-35e1-44c7-9118-3eb2ec12d6af\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql2rj" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.221783 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/af77c760-35e1-44c7-9118-3eb2ec12d6af-host-cni-netd\") pod \"ovnkube-node-ql2rj\" (UID: \"af77c760-35e1-44c7-9118-3eb2ec12d6af\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql2rj" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.221804 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/af77c760-35e1-44c7-9118-3eb2ec12d6af-host-run-ovn-kubernetes\") pod \"ovnkube-node-ql2rj\" (UID: \"af77c760-35e1-44c7-9118-3eb2ec12d6af\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql2rj" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.221827 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/af77c760-35e1-44c7-9118-3eb2ec12d6af-host-kubelet\") pod \"ovnkube-node-ql2rj\" (UID: \"af77c760-35e1-44c7-9118-3eb2ec12d6af\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql2rj" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.221847 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/af77c760-35e1-44c7-9118-3eb2ec12d6af-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ql2rj\" (UID: \"af77c760-35e1-44c7-9118-3eb2ec12d6af\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql2rj" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.221964 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af77c760-35e1-44c7-9118-3eb2ec12d6af-ovnkube-config\") pod \"ovnkube-node-ql2rj\" (UID: \"af77c760-35e1-44c7-9118-3eb2ec12d6af\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql2rj" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.222040 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/af77c760-35e1-44c7-9118-3eb2ec12d6af-var-lib-openvswitch\") pod \"ovnkube-node-ql2rj\" (UID: \"af77c760-35e1-44c7-9118-3eb2ec12d6af\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql2rj" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.222158 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/af77c760-35e1-44c7-9118-3eb2ec12d6af-run-systemd\") pod \"ovnkube-node-ql2rj\" (UID: \"af77c760-35e1-44c7-9118-3eb2ec12d6af\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql2rj" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.222215 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/af77c760-35e1-44c7-9118-3eb2ec12d6af-log-socket\") pod \"ovnkube-node-ql2rj\" (UID: \"af77c760-35e1-44c7-9118-3eb2ec12d6af\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql2rj" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.222244 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/af77c760-35e1-44c7-9118-3eb2ec12d6af-etc-openvswitch\") pod \"ovnkube-node-ql2rj\" (UID: \"af77c760-35e1-44c7-9118-3eb2ec12d6af\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql2rj" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.222283 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af77c760-35e1-44c7-9118-3eb2ec12d6af-ovnkube-script-lib\") pod \"ovnkube-node-ql2rj\" (UID: \"af77c760-35e1-44c7-9118-3eb2ec12d6af\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql2rj" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.222343 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/af77c760-35e1-44c7-9118-3eb2ec12d6af-systemd-units\") pod \"ovnkube-node-ql2rj\" (UID: \"af77c760-35e1-44c7-9118-3eb2ec12d6af\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql2rj" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.222407 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af77c760-35e1-44c7-9118-3eb2ec12d6af-ovn-node-metrics-cert\") pod \"ovnkube-node-ql2rj\" (UID: \"af77c760-35e1-44c7-9118-3eb2ec12d6af\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql2rj" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.222526 4813 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/af77c760-35e1-44c7-9118-3eb2ec12d6af-host-slash\") pod \"ovnkube-node-ql2rj\" (UID: \"af77c760-35e1-44c7-9118-3eb2ec12d6af\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql2rj" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.222554 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/af77c760-35e1-44c7-9118-3eb2ec12d6af-run-openvswitch\") pod \"ovnkube-node-ql2rj\" (UID: \"af77c760-35e1-44c7-9118-3eb2ec12d6af\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql2rj" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.222585 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ch25z\" (UniqueName: \"kubernetes.io/projected/af77c760-35e1-44c7-9118-3eb2ec12d6af-kube-api-access-ch25z\") pod \"ovnkube-node-ql2rj\" (UID: \"af77c760-35e1-44c7-9118-3eb2ec12d6af\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql2rj" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.222673 4813 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8460ec76-ba89-4f8f-9055-d7274ab52d11-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.222710 4813 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8460ec76-ba89-4f8f-9055-d7274ab52d11-host-kubelet\") on node \"crc\" DevicePath \"\"" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.222722 4813 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8460ec76-ba89-4f8f-9055-d7274ab52d11-run-systemd\") on node \"crc\" DevicePath \"\"" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.222735 4813 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8460ec76-ba89-4f8f-9055-d7274ab52d11-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.222747 4813 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8460ec76-ba89-4f8f-9055-d7274ab52d11-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.222762 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-svkcf\" (UniqueName: \"kubernetes.io/projected/8460ec76-ba89-4f8f-9055-d7274ab52d11-kube-api-access-svkcf\") on node \"crc\" DevicePath \"\"" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.222776 4813 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8460ec76-ba89-4f8f-9055-d7274ab52d11-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.222788 4813 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8460ec76-ba89-4f8f-9055-d7274ab52d11-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.222800 4813 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/8460ec76-ba89-4f8f-9055-d7274ab52d11-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.222812 4813 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8460ec76-ba89-4f8f-9055-d7274ab52d11-host-cni-netd\") on node \"crc\" DevicePath \"\"" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.222823 4813 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8460ec76-ba89-4f8f-9055-d7274ab52d11-node-log\") on node \"crc\" DevicePath \"\"" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.222834 4813 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8460ec76-ba89-4f8f-9055-d7274ab52d11-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.324154 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/af77c760-35e1-44c7-9118-3eb2ec12d6af-host-slash\") pod \"ovnkube-node-ql2rj\" (UID: \"af77c760-35e1-44c7-9118-3eb2ec12d6af\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql2rj" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.324209 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/af77c760-35e1-44c7-9118-3eb2ec12d6af-run-openvswitch\") pod \"ovnkube-node-ql2rj\" (UID: \"af77c760-35e1-44c7-9118-3eb2ec12d6af\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql2rj" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.324232 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ch25z\" (UniqueName: \"kubernetes.io/projected/af77c760-35e1-44c7-9118-3eb2ec12d6af-kube-api-access-ch25z\") pod \"ovnkube-node-ql2rj\" (UID: \"af77c760-35e1-44c7-9118-3eb2ec12d6af\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql2rj" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.324253 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/af77c760-35e1-44c7-9118-3eb2ec12d6af-run-ovn\") pod \"ovnkube-node-ql2rj\" (UID: \"af77c760-35e1-44c7-9118-3eb2ec12d6af\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql2rj" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.324271 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af77c760-35e1-44c7-9118-3eb2ec12d6af-env-overrides\") pod \"ovnkube-node-ql2rj\" (UID: \"af77c760-35e1-44c7-9118-3eb2ec12d6af\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql2rj" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.324291 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/af77c760-35e1-44c7-9118-3eb2ec12d6af-node-log\") pod \"ovnkube-node-ql2rj\" (UID: \"af77c760-35e1-44c7-9118-3eb2ec12d6af\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql2rj" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.324308 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/af77c760-35e1-44c7-9118-3eb2ec12d6af-host-run-netns\") pod \"ovnkube-node-ql2rj\" (UID: \"af77c760-35e1-44c7-9118-3eb2ec12d6af\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-ql2rj" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.324312 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/af77c760-35e1-44c7-9118-3eb2ec12d6af-host-slash\") pod \"ovnkube-node-ql2rj\" (UID: \"af77c760-35e1-44c7-9118-3eb2ec12d6af\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql2rj" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.324363 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/af77c760-35e1-44c7-9118-3eb2ec12d6af-host-cni-bin\") pod \"ovnkube-node-ql2rj\" (UID: \"af77c760-35e1-44c7-9118-3eb2ec12d6af\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql2rj" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.324392 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/af77c760-35e1-44c7-9118-3eb2ec12d6af-node-log\") pod \"ovnkube-node-ql2rj\" (UID: \"af77c760-35e1-44c7-9118-3eb2ec12d6af\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql2rj" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.324327 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/af77c760-35e1-44c7-9118-3eb2ec12d6af-host-cni-bin\") pod \"ovnkube-node-ql2rj\" (UID: \"af77c760-35e1-44c7-9118-3eb2ec12d6af\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql2rj" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.324420 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/af77c760-35e1-44c7-9118-3eb2ec12d6af-host-cni-netd\") pod \"ovnkube-node-ql2rj\" (UID: \"af77c760-35e1-44c7-9118-3eb2ec12d6af\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql2rj" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.324437 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/af77c760-35e1-44c7-9118-3eb2ec12d6af-host-run-netns\") pod \"ovnkube-node-ql2rj\" (UID: \"af77c760-35e1-44c7-9118-3eb2ec12d6af\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql2rj" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.324439 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/af77c760-35e1-44c7-9118-3eb2ec12d6af-host-run-ovn-kubernetes\") pod \"ovnkube-node-ql2rj\" (UID: \"af77c760-35e1-44c7-9118-3eb2ec12d6af\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql2rj" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.324419 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/af77c760-35e1-44c7-9118-3eb2ec12d6af-run-openvswitch\") pod \"ovnkube-node-ql2rj\" (UID: \"af77c760-35e1-44c7-9118-3eb2ec12d6af\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql2rj" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.324466 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/af77c760-35e1-44c7-9118-3eb2ec12d6af-host-kubelet\") pod \"ovnkube-node-ql2rj\" (UID: \"af77c760-35e1-44c7-9118-3eb2ec12d6af\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql2rj" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.324467 4813 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/af77c760-35e1-44c7-9118-3eb2ec12d6af-host-cni-netd\") pod \"ovnkube-node-ql2rj\" (UID: \"af77c760-35e1-44c7-9118-3eb2ec12d6af\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql2rj" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.324506 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/af77c760-35e1-44c7-9118-3eb2ec12d6af-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ql2rj\" (UID: \"af77c760-35e1-44c7-9118-3eb2ec12d6af\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql2rj" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.324428 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/af77c760-35e1-44c7-9118-3eb2ec12d6af-run-ovn\") pod \"ovnkube-node-ql2rj\" (UID: \"af77c760-35e1-44c7-9118-3eb2ec12d6af\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql2rj" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.324486 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/af77c760-35e1-44c7-9118-3eb2ec12d6af-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ql2rj\" (UID: \"af77c760-35e1-44c7-9118-3eb2ec12d6af\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql2rj" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.324476 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/af77c760-35e1-44c7-9118-3eb2ec12d6af-host-run-ovn-kubernetes\") pod \"ovnkube-node-ql2rj\" (UID: \"af77c760-35e1-44c7-9118-3eb2ec12d6af\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql2rj" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.324494 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/af77c760-35e1-44c7-9118-3eb2ec12d6af-host-kubelet\") pod \"ovnkube-node-ql2rj\" (UID: \"af77c760-35e1-44c7-9118-3eb2ec12d6af\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql2rj" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.324568 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af77c760-35e1-44c7-9118-3eb2ec12d6af-ovnkube-config\") pod \"ovnkube-node-ql2rj\" (UID: \"af77c760-35e1-44c7-9118-3eb2ec12d6af\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql2rj" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.324598 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/af77c760-35e1-44c7-9118-3eb2ec12d6af-var-lib-openvswitch\") pod \"ovnkube-node-ql2rj\" (UID: \"af77c760-35e1-44c7-9118-3eb2ec12d6af\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql2rj" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.324644 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/af77c760-35e1-44c7-9118-3eb2ec12d6af-run-systemd\") pod \"ovnkube-node-ql2rj\" (UID: \"af77c760-35e1-44c7-9118-3eb2ec12d6af\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql2rj" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.324665 4813 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/af77c760-35e1-44c7-9118-3eb2ec12d6af-log-socket\") pod \"ovnkube-node-ql2rj\" (UID: \"af77c760-35e1-44c7-9118-3eb2ec12d6af\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql2rj" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.324693 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/af77c760-35e1-44c7-9118-3eb2ec12d6af-var-lib-openvswitch\") pod \"ovnkube-node-ql2rj\" (UID: \"af77c760-35e1-44c7-9118-3eb2ec12d6af\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql2rj" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.324708 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/af77c760-35e1-44c7-9118-3eb2ec12d6af-etc-openvswitch\") pod \"ovnkube-node-ql2rj\" (UID: \"af77c760-35e1-44c7-9118-3eb2ec12d6af\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql2rj" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.324748 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/af77c760-35e1-44c7-9118-3eb2ec12d6af-run-systemd\") pod \"ovnkube-node-ql2rj\" (UID: \"af77c760-35e1-44c7-9118-3eb2ec12d6af\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql2rj" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.324775 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/af77c760-35e1-44c7-9118-3eb2ec12d6af-log-socket\") pod \"ovnkube-node-ql2rj\" (UID: \"af77c760-35e1-44c7-9118-3eb2ec12d6af\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql2rj" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.324730 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/af77c760-35e1-44c7-9118-3eb2ec12d6af-etc-openvswitch\") pod \"ovnkube-node-ql2rj\" (UID: \"af77c760-35e1-44c7-9118-3eb2ec12d6af\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql2rj" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.324788 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af77c760-35e1-44c7-9118-3eb2ec12d6af-ovnkube-script-lib\") pod \"ovnkube-node-ql2rj\" (UID: \"af77c760-35e1-44c7-9118-3eb2ec12d6af\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql2rj" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.324866 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/af77c760-35e1-44c7-9118-3eb2ec12d6af-systemd-units\") pod \"ovnkube-node-ql2rj\" (UID: \"af77c760-35e1-44c7-9118-3eb2ec12d6af\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql2rj" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.324894 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af77c760-35e1-44c7-9118-3eb2ec12d6af-ovn-node-metrics-cert\") pod \"ovnkube-node-ql2rj\" (UID: \"af77c760-35e1-44c7-9118-3eb2ec12d6af\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql2rj" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.324944 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/af77c760-35e1-44c7-9118-3eb2ec12d6af-systemd-units\") 
pod \"ovnkube-node-ql2rj\" (UID: \"af77c760-35e1-44c7-9118-3eb2ec12d6af\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql2rj" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.325410 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af77c760-35e1-44c7-9118-3eb2ec12d6af-ovnkube-config\") pod \"ovnkube-node-ql2rj\" (UID: \"af77c760-35e1-44c7-9118-3eb2ec12d6af\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql2rj" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.325463 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af77c760-35e1-44c7-9118-3eb2ec12d6af-ovnkube-script-lib\") pod \"ovnkube-node-ql2rj\" (UID: \"af77c760-35e1-44c7-9118-3eb2ec12d6af\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql2rj" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.325607 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af77c760-35e1-44c7-9118-3eb2ec12d6af-env-overrides\") pod \"ovnkube-node-ql2rj\" (UID: \"af77c760-35e1-44c7-9118-3eb2ec12d6af\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql2rj" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.328162 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af77c760-35e1-44c7-9118-3eb2ec12d6af-ovn-node-metrics-cert\") pod \"ovnkube-node-ql2rj\" (UID: \"af77c760-35e1-44c7-9118-3eb2ec12d6af\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql2rj" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.340764 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ch25z\" (UniqueName: \"kubernetes.io/projected/af77c760-35e1-44c7-9118-3eb2ec12d6af-kube-api-access-ch25z\") pod \"ovnkube-node-ql2rj\" (UID: \"af77c760-35e1-44c7-9118-3eb2ec12d6af\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql2rj" Nov 25 10:43:56 crc kubenswrapper[4813]: I1125 10:43:56.421778 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ql2rj" Nov 25 10:43:57 crc kubenswrapper[4813]: I1125 10:43:57.044995 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8s5k7_8460ec76-ba89-4f8f-9055-d7274ab52d11/ovn-acl-logging/0.log" Nov 25 10:43:57 crc kubenswrapper[4813]: I1125 10:43:57.046398 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8s5k7_8460ec76-ba89-4f8f-9055-d7274ab52d11/ovn-controller/0.log" Nov 25 10:43:57 crc kubenswrapper[4813]: I1125 10:43:57.046869 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" event={"ID":"8460ec76-ba89-4f8f-9055-d7274ab52d11","Type":"ContainerDied","Data":"94c2e058adc2b124baf2d5fc38723175acfb89906c9f5397e682f8bf1c617b0c"} Nov 25 10:43:57 crc kubenswrapper[4813]: I1125 10:43:57.046925 4813 scope.go:117] "RemoveContainer" containerID="0e445d1b17b17b79ca73cab7e0b8c0fde1cee7996193a9b5e3155593909b4a3a" Nov 25 10:43:57 crc kubenswrapper[4813]: I1125 10:43:57.046995 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8s5k7" Nov 25 10:43:57 crc kubenswrapper[4813]: I1125 10:43:57.048827 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rlpbx_98439068-3c89-4c1b-8bb8-8aa848ef0cd3/kube-multus/2.log" Nov 25 10:43:57 crc kubenswrapper[4813]: I1125 10:43:57.048897 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rlpbx" event={"ID":"98439068-3c89-4c1b-8bb8-8aa848ef0cd3","Type":"ContainerStarted","Data":"dca3707200f2d1f69aba108a306f60fe596dbb8b3c7b187809477f887e75eb99"} Nov 25 10:43:57 crc kubenswrapper[4813]: I1125 10:43:57.050552 4813 generic.go:334] "Generic (PLEG): container finished" podID="af77c760-35e1-44c7-9118-3eb2ec12d6af" containerID="f2e652cef458a532e1c8a08066cb5d3bbc914ea216cfaa01890a56d166b3451e" exitCode=0 Nov 25 10:43:57 crc kubenswrapper[4813]: I1125 10:43:57.050624 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ql2rj" event={"ID":"af77c760-35e1-44c7-9118-3eb2ec12d6af","Type":"ContainerDied","Data":"f2e652cef458a532e1c8a08066cb5d3bbc914ea216cfaa01890a56d166b3451e"} Nov 25 10:43:57 crc kubenswrapper[4813]: I1125 10:43:57.050785 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ql2rj" event={"ID":"af77c760-35e1-44c7-9118-3eb2ec12d6af","Type":"ContainerStarted","Data":"26330b2f7de7f60540201083180627322a993995d2544c0d62e83abd4e13f216"} Nov 25 10:43:57 crc kubenswrapper[4813]: I1125 10:43:57.071844 4813 scope.go:117] "RemoveContainer" containerID="32898e756d7697bcb5b6ae6780b7b752be67b44b9ce8c2f2459477c7f0b0a28d" Nov 25 10:43:57 crc kubenswrapper[4813]: I1125 10:43:57.090602 4813 scope.go:117] "RemoveContainer" containerID="ee35613ff013fdd9f9ba4aa81006a99cd328ab65010b9b337815829bfcc88937" Nov 25 10:43:57 crc kubenswrapper[4813]: I1125 10:43:57.112179 4813 scope.go:117] "RemoveContainer" containerID="1581fa41d3a426258f7c464d5e0f2ad431917ccec0616d26bb8b0affa320c90e" Nov 25 10:43:57 crc kubenswrapper[4813]: I1125 10:43:57.123990 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8s5k7"] Nov 25 10:43:57 crc kubenswrapper[4813]: I1125 10:43:57.135671 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8s5k7"] Nov 25 10:43:57 crc kubenswrapper[4813]: I1125 10:43:57.140602 4813 scope.go:117] "RemoveContainer" containerID="0ab3178c217051fe9026c77a963c194bed57ec0fb9521678f41c7c16235ca789" Nov 25 10:43:57 crc kubenswrapper[4813]: I1125 10:43:57.166861 4813 scope.go:117] "RemoveContainer" containerID="d0292e263e2315d5f0352fb15d9e84e89f103c0b8e3371db2a611b001c5a3fe6" Nov 25 10:43:57 crc kubenswrapper[4813]: I1125 10:43:57.186298 4813 scope.go:117] "RemoveContainer" containerID="7c4c4032f6080041e0b54686cb2c9981d2578e7a2bd02bcc1cf008c8fa3bfb6d" Nov 25 10:43:57 crc kubenswrapper[4813]: I1125 10:43:57.199911 4813 scope.go:117] "RemoveContainer" containerID="7324d51c21107fadbd2f170e16f3cc20fc473ca9b7b1bbe0fc5e64378bd6ab7f" Nov 25 10:43:57 crc kubenswrapper[4813]: I1125 10:43:57.217085 4813 scope.go:117] "RemoveContainer" containerID="6554bcb1ce7e97de39f99556fc4e3db63a583ea45bd87706a3c7737a8bde4f5b" Nov 25 10:43:57 crc kubenswrapper[4813]: I1125 10:43:57.628973 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8460ec76-ba89-4f8f-9055-d7274ab52d11" path="/var/lib/kubelet/pods/8460ec76-ba89-4f8f-9055-d7274ab52d11/volumes" Nov 25 10:43:58 crc kubenswrapper[4813]: I1125 
10:43:58.062148 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ql2rj" event={"ID":"af77c760-35e1-44c7-9118-3eb2ec12d6af","Type":"ContainerStarted","Data":"8292a99def2f4d46e7472428f822be8854c90a2358bf60605f4548b9c0559d8e"} Nov 25 10:43:58 crc kubenswrapper[4813]: I1125 10:43:58.062527 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ql2rj" event={"ID":"af77c760-35e1-44c7-9118-3eb2ec12d6af","Type":"ContainerStarted","Data":"5071257529f0d6a61f734f473ba201cfeba43578a3d821fc4130d0e6d358045e"} Nov 25 10:43:58 crc kubenswrapper[4813]: I1125 10:43:58.062538 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ql2rj" event={"ID":"af77c760-35e1-44c7-9118-3eb2ec12d6af","Type":"ContainerStarted","Data":"c9ac6b5bd7e349e04d791b66aab0e7539dea15c86fb0ffaa1d3e5868dd53950c"} Nov 25 10:43:58 crc kubenswrapper[4813]: I1125 10:43:58.062549 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ql2rj" event={"ID":"af77c760-35e1-44c7-9118-3eb2ec12d6af","Type":"ContainerStarted","Data":"60276e166bd5082fcd007e9ef42f799f065fb9f91015b25b1acfa5dca254abd1"} Nov 25 10:43:58 crc kubenswrapper[4813]: I1125 10:43:58.062559 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ql2rj" event={"ID":"af77c760-35e1-44c7-9118-3eb2ec12d6af","Type":"ContainerStarted","Data":"982e844dc1020fb910acfe3b437ad475f4b4e5dd5faa245ebaca7b7f1b5dcb8f"} Nov 25 10:43:58 crc kubenswrapper[4813]: I1125 10:43:58.062566 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ql2rj" event={"ID":"af77c760-35e1-44c7-9118-3eb2ec12d6af","Type":"ContainerStarted","Data":"f69735570f2105b0abb12f9b1f89b365a8491ffb0f129e54be122c16abee08a0"} Nov 25 10:44:00 crc kubenswrapper[4813]: I1125 10:44:00.079456 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ql2rj" event={"ID":"af77c760-35e1-44c7-9118-3eb2ec12d6af","Type":"ContainerStarted","Data":"9cdc97a3af30b450661cf929b5af75d5e869069e5830d7c4ce35ee170b414eba"} Nov 25 10:44:04 crc kubenswrapper[4813]: I1125 10:44:04.113717 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ql2rj" event={"ID":"af77c760-35e1-44c7-9118-3eb2ec12d6af","Type":"ContainerStarted","Data":"5da25c3adb38adfb99ea3299b5d35e576907f68d274b1dbf37fa57288f1d8f5f"} Nov 25 10:44:04 crc kubenswrapper[4813]: I1125 10:44:04.114386 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ql2rj" Nov 25 10:44:04 crc kubenswrapper[4813]: I1125 10:44:04.114398 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ql2rj" Nov 25 10:44:04 crc kubenswrapper[4813]: I1125 10:44:04.145967 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-ql2rj" podStartSLOduration=8.145946023 podStartE2EDuration="8.145946023s" podCreationTimestamp="2025-11-25 10:43:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 10:44:04.14057639 +0000 UTC m=+741.270286296" watchObservedRunningTime="2025-11-25 10:44:04.145946023 +0000 UTC m=+741.275655919" Nov 25 10:44:04 crc kubenswrapper[4813]: I1125 10:44:04.148394 4813 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ql2rj" Nov 25 10:44:05 crc kubenswrapper[4813]: I1125 10:44:05.120807 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ql2rj" Nov 25 10:44:05 crc kubenswrapper[4813]: I1125 10:44:05.153469 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ql2rj" Nov 25 10:44:18 crc kubenswrapper[4813]: I1125 10:44:18.154637 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-vd4gc"] Nov 25 10:44:18 crc kubenswrapper[4813]: I1125 10:44:18.155361 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-vd4gc" podUID="09baf5a6-68d3-4173-ba92-46e36fab8a2e" containerName="controller-manager" containerID="cri-o://bf7168669f63ce46243ded00ff312f69c3d5533c5df914556ff23be16aaaf44a" gracePeriod=30 Nov 25 10:44:18 crc kubenswrapper[4813]: I1125 10:44:18.268452 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-ckjsl"] Nov 25 10:44:18 crc kubenswrapper[4813]: I1125 10:44:18.268955 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ckjsl" podUID="65730459-3e56-4cd2-97f4-4e47f60c32c6" containerName="route-controller-manager" containerID="cri-o://5ebbbe111cb9bd8bab4f2563fbf434017a30eac4d870ccfa0789fada5346f553" gracePeriod=30 Nov 25 10:44:18 crc kubenswrapper[4813]: I1125 10:44:18.680548 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-vd4gc" Nov 25 10:44:18 crc kubenswrapper[4813]: I1125 10:44:18.814780 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ckjsl" Nov 25 10:44:18 crc kubenswrapper[4813]: I1125 10:44:18.827784 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/09baf5a6-68d3-4173-ba92-46e36fab8a2e-client-ca\") pod \"09baf5a6-68d3-4173-ba92-46e36fab8a2e\" (UID: \"09baf5a6-68d3-4173-ba92-46e36fab8a2e\") " Nov 25 10:44:18 crc kubenswrapper[4813]: I1125 10:44:18.827876 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09baf5a6-68d3-4173-ba92-46e36fab8a2e-serving-cert\") pod \"09baf5a6-68d3-4173-ba92-46e36fab8a2e\" (UID: \"09baf5a6-68d3-4173-ba92-46e36fab8a2e\") " Nov 25 10:44:18 crc kubenswrapper[4813]: I1125 10:44:18.827961 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09baf5a6-68d3-4173-ba92-46e36fab8a2e-config\") pod \"09baf5a6-68d3-4173-ba92-46e36fab8a2e\" (UID: \"09baf5a6-68d3-4173-ba92-46e36fab8a2e\") " Nov 25 10:44:18 crc kubenswrapper[4813]: I1125 10:44:18.828135 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/09baf5a6-68d3-4173-ba92-46e36fab8a2e-proxy-ca-bundles\") pod \"09baf5a6-68d3-4173-ba92-46e36fab8a2e\" (UID: \"09baf5a6-68d3-4173-ba92-46e36fab8a2e\") " Nov 25 10:44:18 crc kubenswrapper[4813]: I1125 10:44:18.828168 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hsznf\" (UniqueName: \"kubernetes.io/projected/09baf5a6-68d3-4173-ba92-46e36fab8a2e-kube-api-access-hsznf\") pod \"09baf5a6-68d3-4173-ba92-46e36fab8a2e\" (UID: \"09baf5a6-68d3-4173-ba92-46e36fab8a2e\") " Nov 25 10:44:18 crc kubenswrapper[4813]: I1125 10:44:18.829444 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09baf5a6-68d3-4173-ba92-46e36fab8a2e-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "09baf5a6-68d3-4173-ba92-46e36fab8a2e" (UID: "09baf5a6-68d3-4173-ba92-46e36fab8a2e"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:44:18 crc kubenswrapper[4813]: I1125 10:44:18.829853 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09baf5a6-68d3-4173-ba92-46e36fab8a2e-client-ca" (OuterVolumeSpecName: "client-ca") pod "09baf5a6-68d3-4173-ba92-46e36fab8a2e" (UID: "09baf5a6-68d3-4173-ba92-46e36fab8a2e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:44:18 crc kubenswrapper[4813]: I1125 10:44:18.830166 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09baf5a6-68d3-4173-ba92-46e36fab8a2e-config" (OuterVolumeSpecName: "config") pod "09baf5a6-68d3-4173-ba92-46e36fab8a2e" (UID: "09baf5a6-68d3-4173-ba92-46e36fab8a2e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:44:18 crc kubenswrapper[4813]: I1125 10:44:18.838486 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09baf5a6-68d3-4173-ba92-46e36fab8a2e-kube-api-access-hsznf" (OuterVolumeSpecName: "kube-api-access-hsznf") pod "09baf5a6-68d3-4173-ba92-46e36fab8a2e" (UID: "09baf5a6-68d3-4173-ba92-46e36fab8a2e"). 
InnerVolumeSpecName "kube-api-access-hsznf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:44:18 crc kubenswrapper[4813]: I1125 10:44:18.838766 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09baf5a6-68d3-4173-ba92-46e36fab8a2e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09baf5a6-68d3-4173-ba92-46e36fab8a2e" (UID: "09baf5a6-68d3-4173-ba92-46e36fab8a2e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:44:18 crc kubenswrapper[4813]: I1125 10:44:18.929534 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65730459-3e56-4cd2-97f4-4e47f60c32c6-config\") pod \"65730459-3e56-4cd2-97f4-4e47f60c32c6\" (UID: \"65730459-3e56-4cd2-97f4-4e47f60c32c6\") " Nov 25 10:44:18 crc kubenswrapper[4813]: I1125 10:44:18.929920 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqxfz\" (UniqueName: \"kubernetes.io/projected/65730459-3e56-4cd2-97f4-4e47f60c32c6-kube-api-access-nqxfz\") pod \"65730459-3e56-4cd2-97f4-4e47f60c32c6\" (UID: \"65730459-3e56-4cd2-97f4-4e47f60c32c6\") " Nov 25 10:44:18 crc kubenswrapper[4813]: I1125 10:44:18.930041 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65730459-3e56-4cd2-97f4-4e47f60c32c6-serving-cert\") pod \"65730459-3e56-4cd2-97f4-4e47f60c32c6\" (UID: \"65730459-3e56-4cd2-97f4-4e47f60c32c6\") " Nov 25 10:44:18 crc kubenswrapper[4813]: I1125 10:44:18.930202 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/65730459-3e56-4cd2-97f4-4e47f60c32c6-client-ca\") pod \"65730459-3e56-4cd2-97f4-4e47f60c32c6\" (UID: \"65730459-3e56-4cd2-97f4-4e47f60c32c6\") " Nov 25 10:44:18 crc kubenswrapper[4813]: I1125 10:44:18.930479 4813 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/09baf5a6-68d3-4173-ba92-46e36fab8a2e-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 25 10:44:18 crc kubenswrapper[4813]: I1125 10:44:18.930537 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hsznf\" (UniqueName: \"kubernetes.io/projected/09baf5a6-68d3-4173-ba92-46e36fab8a2e-kube-api-access-hsznf\") on node \"crc\" DevicePath \"\"" Nov 25 10:44:18 crc kubenswrapper[4813]: I1125 10:44:18.930625 4813 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/09baf5a6-68d3-4173-ba92-46e36fab8a2e-client-ca\") on node \"crc\" DevicePath \"\"" Nov 25 10:44:18 crc kubenswrapper[4813]: I1125 10:44:18.930703 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09baf5a6-68d3-4173-ba92-46e36fab8a2e-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 10:44:18 crc kubenswrapper[4813]: I1125 10:44:18.930778 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09baf5a6-68d3-4173-ba92-46e36fab8a2e-config\") on node \"crc\" DevicePath \"\"" Nov 25 10:44:18 crc kubenswrapper[4813]: I1125 10:44:18.930780 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65730459-3e56-4cd2-97f4-4e47f60c32c6-client-ca" (OuterVolumeSpecName: "client-ca") pod "65730459-3e56-4cd2-97f4-4e47f60c32c6" (UID: 
"65730459-3e56-4cd2-97f4-4e47f60c32c6"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:44:18 crc kubenswrapper[4813]: I1125 10:44:18.930907 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65730459-3e56-4cd2-97f4-4e47f60c32c6-config" (OuterVolumeSpecName: "config") pod "65730459-3e56-4cd2-97f4-4e47f60c32c6" (UID: "65730459-3e56-4cd2-97f4-4e47f60c32c6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:44:18 crc kubenswrapper[4813]: I1125 10:44:18.933359 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65730459-3e56-4cd2-97f4-4e47f60c32c6-kube-api-access-nqxfz" (OuterVolumeSpecName: "kube-api-access-nqxfz") pod "65730459-3e56-4cd2-97f4-4e47f60c32c6" (UID: "65730459-3e56-4cd2-97f4-4e47f60c32c6"). InnerVolumeSpecName "kube-api-access-nqxfz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:44:18 crc kubenswrapper[4813]: I1125 10:44:18.933356 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65730459-3e56-4cd2-97f4-4e47f60c32c6-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "65730459-3e56-4cd2-97f4-4e47f60c32c6" (UID: "65730459-3e56-4cd2-97f4-4e47f60c32c6"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:44:19 crc kubenswrapper[4813]: I1125 10:44:19.032500 4813 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/65730459-3e56-4cd2-97f4-4e47f60c32c6-client-ca\") on node \"crc\" DevicePath \"\"" Nov 25 10:44:19 crc kubenswrapper[4813]: I1125 10:44:19.032571 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65730459-3e56-4cd2-97f4-4e47f60c32c6-config\") on node \"crc\" DevicePath \"\"" Nov 25 10:44:19 crc kubenswrapper[4813]: I1125 10:44:19.032585 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqxfz\" (UniqueName: \"kubernetes.io/projected/65730459-3e56-4cd2-97f4-4e47f60c32c6-kube-api-access-nqxfz\") on node \"crc\" DevicePath \"\"" Nov 25 10:44:19 crc kubenswrapper[4813]: I1125 10:44:19.032599 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65730459-3e56-4cd2-97f4-4e47f60c32c6-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 10:44:19 crc kubenswrapper[4813]: I1125 10:44:19.193258 4813 generic.go:334] "Generic (PLEG): container finished" podID="09baf5a6-68d3-4173-ba92-46e36fab8a2e" containerID="bf7168669f63ce46243ded00ff312f69c3d5533c5df914556ff23be16aaaf44a" exitCode=0 Nov 25 10:44:19 crc kubenswrapper[4813]: I1125 10:44:19.193428 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-vd4gc" event={"ID":"09baf5a6-68d3-4173-ba92-46e36fab8a2e","Type":"ContainerDied","Data":"bf7168669f63ce46243ded00ff312f69c3d5533c5df914556ff23be16aaaf44a"} Nov 25 10:44:19 crc kubenswrapper[4813]: I1125 10:44:19.193480 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-vd4gc" event={"ID":"09baf5a6-68d3-4173-ba92-46e36fab8a2e","Type":"ContainerDied","Data":"4efe3c696982dfd928adee122138684909869908e310d502cda10e99fc8f7752"} Nov 25 10:44:19 crc kubenswrapper[4813]: I1125 10:44:19.193520 4813 scope.go:117] "RemoveContainer" 
containerID="bf7168669f63ce46243ded00ff312f69c3d5533c5df914556ff23be16aaaf44a" Nov 25 10:44:19 crc kubenswrapper[4813]: I1125 10:44:19.194936 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-vd4gc" Nov 25 10:44:19 crc kubenswrapper[4813]: I1125 10:44:19.195944 4813 generic.go:334] "Generic (PLEG): container finished" podID="65730459-3e56-4cd2-97f4-4e47f60c32c6" containerID="5ebbbe111cb9bd8bab4f2563fbf434017a30eac4d870ccfa0789fada5346f553" exitCode=0 Nov 25 10:44:19 crc kubenswrapper[4813]: I1125 10:44:19.195982 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ckjsl" event={"ID":"65730459-3e56-4cd2-97f4-4e47f60c32c6","Type":"ContainerDied","Data":"5ebbbe111cb9bd8bab4f2563fbf434017a30eac4d870ccfa0789fada5346f553"} Nov 25 10:44:19 crc kubenswrapper[4813]: I1125 10:44:19.196009 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ckjsl" event={"ID":"65730459-3e56-4cd2-97f4-4e47f60c32c6","Type":"ContainerDied","Data":"d932233fa4edbaba7df745c68809a686100ff75122667baef012452360ed8c19"} Nov 25 10:44:19 crc kubenswrapper[4813]: I1125 10:44:19.196069 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ckjsl" Nov 25 10:44:19 crc kubenswrapper[4813]: I1125 10:44:19.214196 4813 scope.go:117] "RemoveContainer" containerID="bf7168669f63ce46243ded00ff312f69c3d5533c5df914556ff23be16aaaf44a" Nov 25 10:44:19 crc kubenswrapper[4813]: E1125 10:44:19.214835 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf7168669f63ce46243ded00ff312f69c3d5533c5df914556ff23be16aaaf44a\": container with ID starting with bf7168669f63ce46243ded00ff312f69c3d5533c5df914556ff23be16aaaf44a not found: ID does not exist" containerID="bf7168669f63ce46243ded00ff312f69c3d5533c5df914556ff23be16aaaf44a" Nov 25 10:44:19 crc kubenswrapper[4813]: I1125 10:44:19.214873 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf7168669f63ce46243ded00ff312f69c3d5533c5df914556ff23be16aaaf44a"} err="failed to get container status \"bf7168669f63ce46243ded00ff312f69c3d5533c5df914556ff23be16aaaf44a\": rpc error: code = NotFound desc = could not find container \"bf7168669f63ce46243ded00ff312f69c3d5533c5df914556ff23be16aaaf44a\": container with ID starting with bf7168669f63ce46243ded00ff312f69c3d5533c5df914556ff23be16aaaf44a not found: ID does not exist" Nov 25 10:44:19 crc kubenswrapper[4813]: I1125 10:44:19.214898 4813 scope.go:117] "RemoveContainer" containerID="5ebbbe111cb9bd8bab4f2563fbf434017a30eac4d870ccfa0789fada5346f553" Nov 25 10:44:19 crc kubenswrapper[4813]: I1125 10:44:19.235275 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-vd4gc"] Nov 25 10:44:19 crc kubenswrapper[4813]: I1125 10:44:19.237203 4813 scope.go:117] "RemoveContainer" containerID="5ebbbe111cb9bd8bab4f2563fbf434017a30eac4d870ccfa0789fada5346f553" Nov 25 10:44:19 crc kubenswrapper[4813]: E1125 10:44:19.237919 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ebbbe111cb9bd8bab4f2563fbf434017a30eac4d870ccfa0789fada5346f553\": container with ID starting with 
5ebbbe111cb9bd8bab4f2563fbf434017a30eac4d870ccfa0789fada5346f553 not found: ID does not exist" containerID="5ebbbe111cb9bd8bab4f2563fbf434017a30eac4d870ccfa0789fada5346f553" Nov 25 10:44:19 crc kubenswrapper[4813]: I1125 10:44:19.237983 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ebbbe111cb9bd8bab4f2563fbf434017a30eac4d870ccfa0789fada5346f553"} err="failed to get container status \"5ebbbe111cb9bd8bab4f2563fbf434017a30eac4d870ccfa0789fada5346f553\": rpc error: code = NotFound desc = could not find container \"5ebbbe111cb9bd8bab4f2563fbf434017a30eac4d870ccfa0789fada5346f553\": container with ID starting with 5ebbbe111cb9bd8bab4f2563fbf434017a30eac4d870ccfa0789fada5346f553 not found: ID does not exist" Nov 25 10:44:19 crc kubenswrapper[4813]: I1125 10:44:19.239232 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-vd4gc"] Nov 25 10:44:19 crc kubenswrapper[4813]: I1125 10:44:19.250324 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-ckjsl"] Nov 25 10:44:19 crc kubenswrapper[4813]: I1125 10:44:19.253711 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-ckjsl"] Nov 25 10:44:19 crc kubenswrapper[4813]: I1125 10:44:19.287984 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6f99864fc7-xf926"] Nov 25 10:44:19 crc kubenswrapper[4813]: E1125 10:44:19.288228 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09baf5a6-68d3-4173-ba92-46e36fab8a2e" containerName="controller-manager" Nov 25 10:44:19 crc kubenswrapper[4813]: I1125 10:44:19.288241 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="09baf5a6-68d3-4173-ba92-46e36fab8a2e" containerName="controller-manager" Nov 25 10:44:19 crc kubenswrapper[4813]: E1125 10:44:19.288258 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65730459-3e56-4cd2-97f4-4e47f60c32c6" containerName="route-controller-manager" Nov 25 10:44:19 crc kubenswrapper[4813]: I1125 10:44:19.288264 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="65730459-3e56-4cd2-97f4-4e47f60c32c6" containerName="route-controller-manager" Nov 25 10:44:19 crc kubenswrapper[4813]: I1125 10:44:19.288357 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="65730459-3e56-4cd2-97f4-4e47f60c32c6" containerName="route-controller-manager" Nov 25 10:44:19 crc kubenswrapper[4813]: I1125 10:44:19.288368 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="09baf5a6-68d3-4173-ba92-46e36fab8a2e" containerName="controller-manager" Nov 25 10:44:19 crc kubenswrapper[4813]: I1125 10:44:19.288736 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6f99864fc7-xf926" Nov 25 10:44:19 crc kubenswrapper[4813]: I1125 10:44:19.293284 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 25 10:44:19 crc kubenswrapper[4813]: I1125 10:44:19.293469 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 25 10:44:19 crc kubenswrapper[4813]: I1125 10:44:19.293531 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 25 10:44:19 crc kubenswrapper[4813]: I1125 10:44:19.293709 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 25 10:44:19 crc kubenswrapper[4813]: I1125 10:44:19.293955 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 25 10:44:19 crc kubenswrapper[4813]: I1125 10:44:19.294088 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 25 10:44:19 crc kubenswrapper[4813]: I1125 10:44:19.298900 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 25 10:44:19 crc kubenswrapper[4813]: I1125 10:44:19.299245 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6f99864fc7-xf926"] Nov 25 10:44:19 crc kubenswrapper[4813]: I1125 10:44:19.336030 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvh2q\" (UniqueName: \"kubernetes.io/projected/14afcb43-5700-4549-a41c-eee047c6ec5f-kube-api-access-dvh2q\") pod \"controller-manager-6f99864fc7-xf926\" (UID: \"14afcb43-5700-4549-a41c-eee047c6ec5f\") " pod="openshift-controller-manager/controller-manager-6f99864fc7-xf926" Nov 25 10:44:19 crc kubenswrapper[4813]: I1125 10:44:19.336299 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14afcb43-5700-4549-a41c-eee047c6ec5f-serving-cert\") pod \"controller-manager-6f99864fc7-xf926\" (UID: \"14afcb43-5700-4549-a41c-eee047c6ec5f\") " pod="openshift-controller-manager/controller-manager-6f99864fc7-xf926" Nov 25 10:44:19 crc kubenswrapper[4813]: I1125 10:44:19.336460 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14afcb43-5700-4549-a41c-eee047c6ec5f-client-ca\") pod \"controller-manager-6f99864fc7-xf926\" (UID: \"14afcb43-5700-4549-a41c-eee047c6ec5f\") " pod="openshift-controller-manager/controller-manager-6f99864fc7-xf926" Nov 25 10:44:19 crc kubenswrapper[4813]: I1125 10:44:19.336523 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/14afcb43-5700-4549-a41c-eee047c6ec5f-proxy-ca-bundles\") pod \"controller-manager-6f99864fc7-xf926\" (UID: \"14afcb43-5700-4549-a41c-eee047c6ec5f\") " pod="openshift-controller-manager/controller-manager-6f99864fc7-xf926" Nov 25 10:44:19 crc kubenswrapper[4813]: I1125 10:44:19.336670 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/14afcb43-5700-4549-a41c-eee047c6ec5f-config\") pod \"controller-manager-6f99864fc7-xf926\" (UID: \"14afcb43-5700-4549-a41c-eee047c6ec5f\") " pod="openshift-controller-manager/controller-manager-6f99864fc7-xf926" Nov 25 10:44:19 crc kubenswrapper[4813]: I1125 10:44:19.438467 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvh2q\" (UniqueName: \"kubernetes.io/projected/14afcb43-5700-4549-a41c-eee047c6ec5f-kube-api-access-dvh2q\") pod \"controller-manager-6f99864fc7-xf926\" (UID: \"14afcb43-5700-4549-a41c-eee047c6ec5f\") " pod="openshift-controller-manager/controller-manager-6f99864fc7-xf926" Nov 25 10:44:19 crc kubenswrapper[4813]: I1125 10:44:19.438925 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14afcb43-5700-4549-a41c-eee047c6ec5f-serving-cert\") pod \"controller-manager-6f99864fc7-xf926\" (UID: \"14afcb43-5700-4549-a41c-eee047c6ec5f\") " pod="openshift-controller-manager/controller-manager-6f99864fc7-xf926" Nov 25 10:44:19 crc kubenswrapper[4813]: I1125 10:44:19.438964 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14afcb43-5700-4549-a41c-eee047c6ec5f-client-ca\") pod \"controller-manager-6f99864fc7-xf926\" (UID: \"14afcb43-5700-4549-a41c-eee047c6ec5f\") " pod="openshift-controller-manager/controller-manager-6f99864fc7-xf926" Nov 25 10:44:19 crc kubenswrapper[4813]: I1125 10:44:19.439000 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/14afcb43-5700-4549-a41c-eee047c6ec5f-proxy-ca-bundles\") pod \"controller-manager-6f99864fc7-xf926\" (UID: \"14afcb43-5700-4549-a41c-eee047c6ec5f\") " pod="openshift-controller-manager/controller-manager-6f99864fc7-xf926" Nov 25 10:44:19 crc kubenswrapper[4813]: I1125 10:44:19.439071 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14afcb43-5700-4549-a41c-eee047c6ec5f-config\") pod \"controller-manager-6f99864fc7-xf926\" (UID: \"14afcb43-5700-4549-a41c-eee047c6ec5f\") " pod="openshift-controller-manager/controller-manager-6f99864fc7-xf926" Nov 25 10:44:19 crc kubenswrapper[4813]: I1125 10:44:19.440155 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14afcb43-5700-4549-a41c-eee047c6ec5f-client-ca\") pod \"controller-manager-6f99864fc7-xf926\" (UID: \"14afcb43-5700-4549-a41c-eee047c6ec5f\") " pod="openshift-controller-manager/controller-manager-6f99864fc7-xf926" Nov 25 10:44:19 crc kubenswrapper[4813]: I1125 10:44:19.440659 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/14afcb43-5700-4549-a41c-eee047c6ec5f-proxy-ca-bundles\") pod \"controller-manager-6f99864fc7-xf926\" (UID: \"14afcb43-5700-4549-a41c-eee047c6ec5f\") " pod="openshift-controller-manager/controller-manager-6f99864fc7-xf926" Nov 25 10:44:19 crc kubenswrapper[4813]: I1125 10:44:19.441546 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14afcb43-5700-4549-a41c-eee047c6ec5f-config\") pod \"controller-manager-6f99864fc7-xf926\" (UID: \"14afcb43-5700-4549-a41c-eee047c6ec5f\") " pod="openshift-controller-manager/controller-manager-6f99864fc7-xf926" 
Nov 25 10:44:19 crc kubenswrapper[4813]: I1125 10:44:19.443803 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14afcb43-5700-4549-a41c-eee047c6ec5f-serving-cert\") pod \"controller-manager-6f99864fc7-xf926\" (UID: \"14afcb43-5700-4549-a41c-eee047c6ec5f\") " pod="openshift-controller-manager/controller-manager-6f99864fc7-xf926" Nov 25 10:44:19 crc kubenswrapper[4813]: I1125 10:44:19.467663 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvh2q\" (UniqueName: \"kubernetes.io/projected/14afcb43-5700-4549-a41c-eee047c6ec5f-kube-api-access-dvh2q\") pod \"controller-manager-6f99864fc7-xf926\" (UID: \"14afcb43-5700-4549-a41c-eee047c6ec5f\") " pod="openshift-controller-manager/controller-manager-6f99864fc7-xf926" Nov 25 10:44:19 crc kubenswrapper[4813]: I1125 10:44:19.625760 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6f99864fc7-xf926" Nov 25 10:44:19 crc kubenswrapper[4813]: I1125 10:44:19.631712 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09baf5a6-68d3-4173-ba92-46e36fab8a2e" path="/var/lib/kubelet/pods/09baf5a6-68d3-4173-ba92-46e36fab8a2e/volumes" Nov 25 10:44:19 crc kubenswrapper[4813]: I1125 10:44:19.632343 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65730459-3e56-4cd2-97f4-4e47f60c32c6" path="/var/lib/kubelet/pods/65730459-3e56-4cd2-97f4-4e47f60c32c6/volumes" Nov 25 10:44:20 crc kubenswrapper[4813]: I1125 10:44:20.037500 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6f99864fc7-xf926"] Nov 25 10:44:20 crc kubenswrapper[4813]: W1125 10:44:20.044785 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14afcb43_5700_4549_a41c_eee047c6ec5f.slice/crio-00ad80975f9c386e3305442347a38b9952efa71a0a21ed27f241c2122feed8ba WatchSource:0}: Error finding container 00ad80975f9c386e3305442347a38b9952efa71a0a21ed27f241c2122feed8ba: Status 404 returned error can't find the container with id 00ad80975f9c386e3305442347a38b9952efa71a0a21ed27f241c2122feed8ba Nov 25 10:44:20 crc kubenswrapper[4813]: I1125 10:44:20.205698 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6f99864fc7-xf926" event={"ID":"14afcb43-5700-4549-a41c-eee047c6ec5f","Type":"ContainerStarted","Data":"00ad80975f9c386e3305442347a38b9952efa71a0a21ed27f241c2122feed8ba"} Nov 25 10:44:20 crc kubenswrapper[4813]: I1125 10:44:20.281873 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85f65dbbdb-k6x88"] Nov 25 10:44:20 crc kubenswrapper[4813]: I1125 10:44:20.282753 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85f65dbbdb-k6x88" Nov 25 10:44:20 crc kubenswrapper[4813]: I1125 10:44:20.286315 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 25 10:44:20 crc kubenswrapper[4813]: I1125 10:44:20.286345 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 25 10:44:20 crc kubenswrapper[4813]: I1125 10:44:20.286621 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 25 10:44:20 crc kubenswrapper[4813]: I1125 10:44:20.286802 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 25 10:44:20 crc kubenswrapper[4813]: I1125 10:44:20.286879 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 25 10:44:20 crc kubenswrapper[4813]: I1125 10:44:20.286913 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 25 10:44:20 crc kubenswrapper[4813]: I1125 10:44:20.297388 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85f65dbbdb-k6x88"] Nov 25 10:44:20 crc kubenswrapper[4813]: I1125 10:44:20.350588 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/90beee2a-e299-44b1-bf9c-482f204e34cc-serving-cert\") pod \"route-controller-manager-85f65dbbdb-k6x88\" (UID: \"90beee2a-e299-44b1-bf9c-482f204e34cc\") " pod="openshift-route-controller-manager/route-controller-manager-85f65dbbdb-k6x88" Nov 25 10:44:20 crc kubenswrapper[4813]: I1125 10:44:20.350642 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/90beee2a-e299-44b1-bf9c-482f204e34cc-client-ca\") pod \"route-controller-manager-85f65dbbdb-k6x88\" (UID: \"90beee2a-e299-44b1-bf9c-482f204e34cc\") " pod="openshift-route-controller-manager/route-controller-manager-85f65dbbdb-k6x88" Nov 25 10:44:20 crc kubenswrapper[4813]: I1125 10:44:20.350696 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90beee2a-e299-44b1-bf9c-482f204e34cc-config\") pod \"route-controller-manager-85f65dbbdb-k6x88\" (UID: \"90beee2a-e299-44b1-bf9c-482f204e34cc\") " pod="openshift-route-controller-manager/route-controller-manager-85f65dbbdb-k6x88" Nov 25 10:44:20 crc kubenswrapper[4813]: I1125 10:44:20.350828 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vshrv\" (UniqueName: \"kubernetes.io/projected/90beee2a-e299-44b1-bf9c-482f204e34cc-kube-api-access-vshrv\") pod \"route-controller-manager-85f65dbbdb-k6x88\" (UID: \"90beee2a-e299-44b1-bf9c-482f204e34cc\") " pod="openshift-route-controller-manager/route-controller-manager-85f65dbbdb-k6x88" Nov 25 10:44:20 crc kubenswrapper[4813]: I1125 10:44:20.452427 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/90beee2a-e299-44b1-bf9c-482f204e34cc-serving-cert\") pod 
\"route-controller-manager-85f65dbbdb-k6x88\" (UID: \"90beee2a-e299-44b1-bf9c-482f204e34cc\") " pod="openshift-route-controller-manager/route-controller-manager-85f65dbbdb-k6x88" Nov 25 10:44:20 crc kubenswrapper[4813]: I1125 10:44:20.452754 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/90beee2a-e299-44b1-bf9c-482f204e34cc-client-ca\") pod \"route-controller-manager-85f65dbbdb-k6x88\" (UID: \"90beee2a-e299-44b1-bf9c-482f204e34cc\") " pod="openshift-route-controller-manager/route-controller-manager-85f65dbbdb-k6x88" Nov 25 10:44:20 crc kubenswrapper[4813]: I1125 10:44:20.452875 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90beee2a-e299-44b1-bf9c-482f204e34cc-config\") pod \"route-controller-manager-85f65dbbdb-k6x88\" (UID: \"90beee2a-e299-44b1-bf9c-482f204e34cc\") " pod="openshift-route-controller-manager/route-controller-manager-85f65dbbdb-k6x88" Nov 25 10:44:20 crc kubenswrapper[4813]: I1125 10:44:20.452998 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vshrv\" (UniqueName: \"kubernetes.io/projected/90beee2a-e299-44b1-bf9c-482f204e34cc-kube-api-access-vshrv\") pod \"route-controller-manager-85f65dbbdb-k6x88\" (UID: \"90beee2a-e299-44b1-bf9c-482f204e34cc\") " pod="openshift-route-controller-manager/route-controller-manager-85f65dbbdb-k6x88" Nov 25 10:44:20 crc kubenswrapper[4813]: I1125 10:44:20.453927 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/90beee2a-e299-44b1-bf9c-482f204e34cc-client-ca\") pod \"route-controller-manager-85f65dbbdb-k6x88\" (UID: \"90beee2a-e299-44b1-bf9c-482f204e34cc\") " pod="openshift-route-controller-manager/route-controller-manager-85f65dbbdb-k6x88" Nov 25 10:44:20 crc kubenswrapper[4813]: I1125 10:44:20.454015 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90beee2a-e299-44b1-bf9c-482f204e34cc-config\") pod \"route-controller-manager-85f65dbbdb-k6x88\" (UID: \"90beee2a-e299-44b1-bf9c-482f204e34cc\") " pod="openshift-route-controller-manager/route-controller-manager-85f65dbbdb-k6x88" Nov 25 10:44:20 crc kubenswrapper[4813]: I1125 10:44:20.459849 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/90beee2a-e299-44b1-bf9c-482f204e34cc-serving-cert\") pod \"route-controller-manager-85f65dbbdb-k6x88\" (UID: \"90beee2a-e299-44b1-bf9c-482f204e34cc\") " pod="openshift-route-controller-manager/route-controller-manager-85f65dbbdb-k6x88" Nov 25 10:44:20 crc kubenswrapper[4813]: I1125 10:44:20.472462 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vshrv\" (UniqueName: \"kubernetes.io/projected/90beee2a-e299-44b1-bf9c-482f204e34cc-kube-api-access-vshrv\") pod \"route-controller-manager-85f65dbbdb-k6x88\" (UID: \"90beee2a-e299-44b1-bf9c-482f204e34cc\") " pod="openshift-route-controller-manager/route-controller-manager-85f65dbbdb-k6x88" Nov 25 10:44:20 crc kubenswrapper[4813]: I1125 10:44:20.629040 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85f65dbbdb-k6x88" Nov 25 10:44:20 crc kubenswrapper[4813]: I1125 10:44:20.838470 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85f65dbbdb-k6x88"] Nov 25 10:44:21 crc kubenswrapper[4813]: I1125 10:44:21.215917 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6f99864fc7-xf926" event={"ID":"14afcb43-5700-4549-a41c-eee047c6ec5f","Type":"ContainerStarted","Data":"1f7a6036c06a2f5e9578b5b7b6fabe2b61b53aa4666ef1bea4b42539077bca57"} Nov 25 10:44:21 crc kubenswrapper[4813]: I1125 10:44:21.217774 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85f65dbbdb-k6x88" event={"ID":"90beee2a-e299-44b1-bf9c-482f204e34cc","Type":"ContainerStarted","Data":"4759940fd53137bd75b228d6ee3af0bdce6d0eae111d5cd89174cb047aebfbd4"} Nov 25 10:44:21 crc kubenswrapper[4813]: I1125 10:44:21.217816 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85f65dbbdb-k6x88" event={"ID":"90beee2a-e299-44b1-bf9c-482f204e34cc","Type":"ContainerStarted","Data":"b451280b9416c1ab791bea4a9df00dc08c07bca99c089aeaf80f0568003adcc5"} Nov 25 10:44:21 crc kubenswrapper[4813]: I1125 10:44:21.218053 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-85f65dbbdb-k6x88" Nov 25 10:44:21 crc kubenswrapper[4813]: I1125 10:44:21.252749 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-85f65dbbdb-k6x88" podStartSLOduration=3.252718818 podStartE2EDuration="3.252718818s" podCreationTimestamp="2025-11-25 10:44:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 10:44:21.251304028 +0000 UTC m=+758.381013934" watchObservedRunningTime="2025-11-25 10:44:21.252718818 +0000 UTC m=+758.382428704" Nov 25 10:44:21 crc kubenswrapper[4813]: I1125 10:44:21.254656 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6f99864fc7-xf926" podStartSLOduration=3.2546450829999998 podStartE2EDuration="3.254645083s" podCreationTimestamp="2025-11-25 10:44:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 10:44:21.235798666 +0000 UTC m=+758.365508582" watchObservedRunningTime="2025-11-25 10:44:21.254645083 +0000 UTC m=+758.384354969" Nov 25 10:44:21 crc kubenswrapper[4813]: I1125 10:44:21.504369 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-85f65dbbdb-k6x88" Nov 25 10:44:22 crc kubenswrapper[4813]: I1125 10:44:22.223259 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6f99864fc7-xf926" Nov 25 10:44:22 crc kubenswrapper[4813]: I1125 10:44:22.228597 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6f99864fc7-xf926" Nov 25 10:44:24 crc kubenswrapper[4813]: I1125 10:44:24.109432 4813 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" 
name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 25 10:44:26 crc kubenswrapper[4813]: I1125 10:44:26.449119 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ql2rj" Nov 25 10:44:37 crc kubenswrapper[4813]: I1125 10:44:37.554996 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e2vxxs"] Nov 25 10:44:37 crc kubenswrapper[4813]: I1125 10:44:37.556772 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e2vxxs" Nov 25 10:44:37 crc kubenswrapper[4813]: I1125 10:44:37.559614 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 25 10:44:37 crc kubenswrapper[4813]: I1125 10:44:37.575300 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e2vxxs"] Nov 25 10:44:37 crc kubenswrapper[4813]: I1125 10:44:37.683935 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b35384e6-d6f1-4613-b61a-5e324239b7eb-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e2vxxs\" (UID: \"b35384e6-d6f1-4613-b61a-5e324239b7eb\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e2vxxs" Nov 25 10:44:37 crc kubenswrapper[4813]: I1125 10:44:37.683985 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b35384e6-d6f1-4613-b61a-5e324239b7eb-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e2vxxs\" (UID: \"b35384e6-d6f1-4613-b61a-5e324239b7eb\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e2vxxs" Nov 25 10:44:37 crc kubenswrapper[4813]: I1125 10:44:37.684058 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7g9h\" (UniqueName: \"kubernetes.io/projected/b35384e6-d6f1-4613-b61a-5e324239b7eb-kube-api-access-f7g9h\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e2vxxs\" (UID: \"b35384e6-d6f1-4613-b61a-5e324239b7eb\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e2vxxs" Nov 25 10:44:37 crc kubenswrapper[4813]: I1125 10:44:37.785426 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7g9h\" (UniqueName: \"kubernetes.io/projected/b35384e6-d6f1-4613-b61a-5e324239b7eb-kube-api-access-f7g9h\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e2vxxs\" (UID: \"b35384e6-d6f1-4613-b61a-5e324239b7eb\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e2vxxs" Nov 25 10:44:37 crc kubenswrapper[4813]: I1125 10:44:37.785539 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b35384e6-d6f1-4613-b61a-5e324239b7eb-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e2vxxs\" (UID: \"b35384e6-d6f1-4613-b61a-5e324239b7eb\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e2vxxs" Nov 25 10:44:37 crc kubenswrapper[4813]: I1125 10:44:37.785584 4813 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b35384e6-d6f1-4613-b61a-5e324239b7eb-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e2vxxs\" (UID: \"b35384e6-d6f1-4613-b61a-5e324239b7eb\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e2vxxs" Nov 25 10:44:37 crc kubenswrapper[4813]: I1125 10:44:37.786539 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b35384e6-d6f1-4613-b61a-5e324239b7eb-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e2vxxs\" (UID: \"b35384e6-d6f1-4613-b61a-5e324239b7eb\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e2vxxs" Nov 25 10:44:37 crc kubenswrapper[4813]: I1125 10:44:37.786619 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b35384e6-d6f1-4613-b61a-5e324239b7eb-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e2vxxs\" (UID: \"b35384e6-d6f1-4613-b61a-5e324239b7eb\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e2vxxs" Nov 25 10:44:37 crc kubenswrapper[4813]: I1125 10:44:37.811647 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7g9h\" (UniqueName: \"kubernetes.io/projected/b35384e6-d6f1-4613-b61a-5e324239b7eb-kube-api-access-f7g9h\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e2vxxs\" (UID: \"b35384e6-d6f1-4613-b61a-5e324239b7eb\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e2vxxs" Nov 25 10:44:37 crc kubenswrapper[4813]: I1125 10:44:37.872927 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e2vxxs" Nov 25 10:44:38 crc kubenswrapper[4813]: I1125 10:44:38.265858 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e2vxxs"] Nov 25 10:44:38 crc kubenswrapper[4813]: W1125 10:44:38.271616 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb35384e6_d6f1_4613_b61a_5e324239b7eb.slice/crio-0d2287e8ddc0e887a3bb4136a8c7cf8b3abffde77007083b05d833e0a1981943 WatchSource:0}: Error finding container 0d2287e8ddc0e887a3bb4136a8c7cf8b3abffde77007083b05d833e0a1981943: Status 404 returned error can't find the container with id 0d2287e8ddc0e887a3bb4136a8c7cf8b3abffde77007083b05d833e0a1981943 Nov 25 10:44:38 crc kubenswrapper[4813]: I1125 10:44:38.310900 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e2vxxs" event={"ID":"b35384e6-d6f1-4613-b61a-5e324239b7eb","Type":"ContainerStarted","Data":"0d2287e8ddc0e887a3bb4136a8c7cf8b3abffde77007083b05d833e0a1981943"} Nov 25 10:44:39 crc kubenswrapper[4813]: I1125 10:44:39.147542 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ggk26"] Nov 25 10:44:39 crc kubenswrapper[4813]: I1125 10:44:39.149437 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ggk26" Nov 25 10:44:39 crc kubenswrapper[4813]: I1125 10:44:39.161171 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ggk26"] Nov 25 10:44:39 crc kubenswrapper[4813]: I1125 10:44:39.203163 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2e06c80-525d-4ded-90b5-cb4823ae8512-catalog-content\") pod \"redhat-operators-ggk26\" (UID: \"c2e06c80-525d-4ded-90b5-cb4823ae8512\") " pod="openshift-marketplace/redhat-operators-ggk26" Nov 25 10:44:39 crc kubenswrapper[4813]: I1125 10:44:39.203237 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njttj\" (UniqueName: \"kubernetes.io/projected/c2e06c80-525d-4ded-90b5-cb4823ae8512-kube-api-access-njttj\") pod \"redhat-operators-ggk26\" (UID: \"c2e06c80-525d-4ded-90b5-cb4823ae8512\") " pod="openshift-marketplace/redhat-operators-ggk26" Nov 25 10:44:39 crc kubenswrapper[4813]: I1125 10:44:39.203268 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2e06c80-525d-4ded-90b5-cb4823ae8512-utilities\") pod \"redhat-operators-ggk26\" (UID: \"c2e06c80-525d-4ded-90b5-cb4823ae8512\") " pod="openshift-marketplace/redhat-operators-ggk26" Nov 25 10:44:39 crc kubenswrapper[4813]: I1125 10:44:39.304655 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2e06c80-525d-4ded-90b5-cb4823ae8512-catalog-content\") pod \"redhat-operators-ggk26\" (UID: \"c2e06c80-525d-4ded-90b5-cb4823ae8512\") " pod="openshift-marketplace/redhat-operators-ggk26" Nov 25 10:44:39 crc kubenswrapper[4813]: I1125 10:44:39.304767 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njttj\" (UniqueName: \"kubernetes.io/projected/c2e06c80-525d-4ded-90b5-cb4823ae8512-kube-api-access-njttj\") pod \"redhat-operators-ggk26\" (UID: \"c2e06c80-525d-4ded-90b5-cb4823ae8512\") " pod="openshift-marketplace/redhat-operators-ggk26" Nov 25 10:44:39 crc kubenswrapper[4813]: I1125 10:44:39.304799 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2e06c80-525d-4ded-90b5-cb4823ae8512-utilities\") pod \"redhat-operators-ggk26\" (UID: \"c2e06c80-525d-4ded-90b5-cb4823ae8512\") " pod="openshift-marketplace/redhat-operators-ggk26" Nov 25 10:44:39 crc kubenswrapper[4813]: I1125 10:44:39.305248 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2e06c80-525d-4ded-90b5-cb4823ae8512-catalog-content\") pod \"redhat-operators-ggk26\" (UID: \"c2e06c80-525d-4ded-90b5-cb4823ae8512\") " pod="openshift-marketplace/redhat-operators-ggk26" Nov 25 10:44:39 crc kubenswrapper[4813]: I1125 10:44:39.305267 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2e06c80-525d-4ded-90b5-cb4823ae8512-utilities\") pod \"redhat-operators-ggk26\" (UID: \"c2e06c80-525d-4ded-90b5-cb4823ae8512\") " pod="openshift-marketplace/redhat-operators-ggk26" Nov 25 10:44:39 crc kubenswrapper[4813]: I1125 10:44:39.317089 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e2vxxs" event={"ID":"b35384e6-d6f1-4613-b61a-5e324239b7eb","Type":"ContainerStarted","Data":"09d2cca1131d43533f54fdf4d6a60c8447ce8f003709cc18da5988d5b07522a8"} Nov 25 10:44:39 crc kubenswrapper[4813]: I1125 10:44:39.327966 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njttj\" (UniqueName: \"kubernetes.io/projected/c2e06c80-525d-4ded-90b5-cb4823ae8512-kube-api-access-njttj\") pod \"redhat-operators-ggk26\" (UID: \"c2e06c80-525d-4ded-90b5-cb4823ae8512\") " pod="openshift-marketplace/redhat-operators-ggk26" Nov 25 10:44:39 crc kubenswrapper[4813]: I1125 10:44:39.474912 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ggk26" Nov 25 10:44:39 crc kubenswrapper[4813]: I1125 10:44:39.872638 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ggk26"] Nov 25 10:44:40 crc kubenswrapper[4813]: I1125 10:44:40.322952 4813 generic.go:334] "Generic (PLEG): container finished" podID="b35384e6-d6f1-4613-b61a-5e324239b7eb" containerID="09d2cca1131d43533f54fdf4d6a60c8447ce8f003709cc18da5988d5b07522a8" exitCode=0 Nov 25 10:44:40 crc kubenswrapper[4813]: I1125 10:44:40.323037 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e2vxxs" event={"ID":"b35384e6-d6f1-4613-b61a-5e324239b7eb","Type":"ContainerDied","Data":"09d2cca1131d43533f54fdf4d6a60c8447ce8f003709cc18da5988d5b07522a8"} Nov 25 10:44:40 crc kubenswrapper[4813]: I1125 10:44:40.324721 4813 generic.go:334] "Generic (PLEG): container finished" podID="c2e06c80-525d-4ded-90b5-cb4823ae8512" containerID="fb223e8e9b7c7018e96f1994fe2cfd1ed6185f2f285c57c869ac1e0602643fff" exitCode=0 Nov 25 10:44:40 crc kubenswrapper[4813]: I1125 10:44:40.324747 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ggk26" event={"ID":"c2e06c80-525d-4ded-90b5-cb4823ae8512","Type":"ContainerDied","Data":"fb223e8e9b7c7018e96f1994fe2cfd1ed6185f2f285c57c869ac1e0602643fff"} Nov 25 10:44:40 crc kubenswrapper[4813]: I1125 10:44:40.324765 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ggk26" event={"ID":"c2e06c80-525d-4ded-90b5-cb4823ae8512","Type":"ContainerStarted","Data":"70abf85947709342a847c811bcd124eb8e9537049dcf254b204da20e08d2865c"} Nov 25 10:44:44 crc kubenswrapper[4813]: I1125 10:44:44.349589 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ggk26" event={"ID":"c2e06c80-525d-4ded-90b5-cb4823ae8512","Type":"ContainerDied","Data":"f1fe2d15c647e5005e1b9852c68cc0498a4f6292e703db8d12101923d35b7de6"} Nov 25 10:44:44 crc kubenswrapper[4813]: I1125 10:44:44.349303 4813 generic.go:334] "Generic (PLEG): container finished" podID="c2e06c80-525d-4ded-90b5-cb4823ae8512" containerID="f1fe2d15c647e5005e1b9852c68cc0498a4f6292e703db8d12101923d35b7de6" exitCode=0 Nov 25 10:44:48 crc kubenswrapper[4813]: I1125 10:44:48.376478 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e2vxxs" event={"ID":"b35384e6-d6f1-4613-b61a-5e324239b7eb","Type":"ContainerStarted","Data":"51d9b48d2c365cbf56cac6d0da87bedb5687a2d96a2d26e435cf51565f14cc3d"} Nov 25 10:44:49 crc kubenswrapper[4813]: I1125 10:44:49.384535 4813 generic.go:334] "Generic (PLEG): 
container finished" podID="b35384e6-d6f1-4613-b61a-5e324239b7eb" containerID="51d9b48d2c365cbf56cac6d0da87bedb5687a2d96a2d26e435cf51565f14cc3d" exitCode=0 Nov 25 10:44:49 crc kubenswrapper[4813]: I1125 10:44:49.384585 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e2vxxs" event={"ID":"b35384e6-d6f1-4613-b61a-5e324239b7eb","Type":"ContainerDied","Data":"51d9b48d2c365cbf56cac6d0da87bedb5687a2d96a2d26e435cf51565f14cc3d"} Nov 25 10:44:54 crc kubenswrapper[4813]: I1125 10:44:54.411768 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ggk26" event={"ID":"c2e06c80-525d-4ded-90b5-cb4823ae8512","Type":"ContainerStarted","Data":"0e58b9be38d2814b5e806e854f989c0dbb7ebb9b820ed7c5a090c50792df02ea"} Nov 25 10:44:54 crc kubenswrapper[4813]: I1125 10:44:54.413645 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e2vxxs" event={"ID":"b35384e6-d6f1-4613-b61a-5e324239b7eb","Type":"ContainerStarted","Data":"0ceaf724ab74507a39c3e2d69135795dc9b398b90085550b624bc4dad367de39"} Nov 25 10:44:55 crc kubenswrapper[4813]: I1125 10:44:55.421097 4813 generic.go:334] "Generic (PLEG): container finished" podID="b35384e6-d6f1-4613-b61a-5e324239b7eb" containerID="0ceaf724ab74507a39c3e2d69135795dc9b398b90085550b624bc4dad367de39" exitCode=0 Nov 25 10:44:55 crc kubenswrapper[4813]: I1125 10:44:55.421185 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e2vxxs" event={"ID":"b35384e6-d6f1-4613-b61a-5e324239b7eb","Type":"ContainerDied","Data":"0ceaf724ab74507a39c3e2d69135795dc9b398b90085550b624bc4dad367de39"} Nov 25 10:44:55 crc kubenswrapper[4813]: I1125 10:44:55.439840 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ggk26" podStartSLOduration=4.984476593 podStartE2EDuration="16.439823176s" podCreationTimestamp="2025-11-25 10:44:39 +0000 UTC" firstStartedPulling="2025-11-25 10:44:40.325872246 +0000 UTC m=+777.455582132" lastFinishedPulling="2025-11-25 10:44:51.781218789 +0000 UTC m=+788.910928715" observedRunningTime="2025-11-25 10:44:55.437601283 +0000 UTC m=+792.567311189" watchObservedRunningTime="2025-11-25 10:44:55.439823176 +0000 UTC m=+792.569533072" Nov 25 10:44:56 crc kubenswrapper[4813]: I1125 10:44:56.721599 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e2vxxs" Nov 25 10:44:56 crc kubenswrapper[4813]: I1125 10:44:56.741462 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b35384e6-d6f1-4613-b61a-5e324239b7eb-util\") pod \"b35384e6-d6f1-4613-b61a-5e324239b7eb\" (UID: \"b35384e6-d6f1-4613-b61a-5e324239b7eb\") " Nov 25 10:44:56 crc kubenswrapper[4813]: I1125 10:44:56.741583 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b35384e6-d6f1-4613-b61a-5e324239b7eb-bundle\") pod \"b35384e6-d6f1-4613-b61a-5e324239b7eb\" (UID: \"b35384e6-d6f1-4613-b61a-5e324239b7eb\") " Nov 25 10:44:56 crc kubenswrapper[4813]: I1125 10:44:56.741635 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7g9h\" (UniqueName: \"kubernetes.io/projected/b35384e6-d6f1-4613-b61a-5e324239b7eb-kube-api-access-f7g9h\") pod \"b35384e6-d6f1-4613-b61a-5e324239b7eb\" (UID: \"b35384e6-d6f1-4613-b61a-5e324239b7eb\") " Nov 25 10:44:56 crc kubenswrapper[4813]: I1125 10:44:56.743225 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b35384e6-d6f1-4613-b61a-5e324239b7eb-bundle" (OuterVolumeSpecName: "bundle") pod "b35384e6-d6f1-4613-b61a-5e324239b7eb" (UID: "b35384e6-d6f1-4613-b61a-5e324239b7eb"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 10:44:56 crc kubenswrapper[4813]: I1125 10:44:56.750320 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b35384e6-d6f1-4613-b61a-5e324239b7eb-kube-api-access-f7g9h" (OuterVolumeSpecName: "kube-api-access-f7g9h") pod "b35384e6-d6f1-4613-b61a-5e324239b7eb" (UID: "b35384e6-d6f1-4613-b61a-5e324239b7eb"). InnerVolumeSpecName "kube-api-access-f7g9h". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:44:56 crc kubenswrapper[4813]: I1125 10:44:56.756784 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b35384e6-d6f1-4613-b61a-5e324239b7eb-util" (OuterVolumeSpecName: "util") pod "b35384e6-d6f1-4613-b61a-5e324239b7eb" (UID: "b35384e6-d6f1-4613-b61a-5e324239b7eb"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 10:44:56 crc kubenswrapper[4813]: I1125 10:44:56.843216 4813 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b35384e6-d6f1-4613-b61a-5e324239b7eb-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 10:44:56 crc kubenswrapper[4813]: I1125 10:44:56.843260 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f7g9h\" (UniqueName: \"kubernetes.io/projected/b35384e6-d6f1-4613-b61a-5e324239b7eb-kube-api-access-f7g9h\") on node \"crc\" DevicePath \"\"" Nov 25 10:44:56 crc kubenswrapper[4813]: I1125 10:44:56.843272 4813 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b35384e6-d6f1-4613-b61a-5e324239b7eb-util\") on node \"crc\" DevicePath \"\"" Nov 25 10:44:57 crc kubenswrapper[4813]: I1125 10:44:57.435569 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e2vxxs" event={"ID":"b35384e6-d6f1-4613-b61a-5e324239b7eb","Type":"ContainerDied","Data":"0d2287e8ddc0e887a3bb4136a8c7cf8b3abffde77007083b05d833e0a1981943"} Nov 25 10:44:57 crc kubenswrapper[4813]: I1125 10:44:57.435614 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d2287e8ddc0e887a3bb4136a8c7cf8b3abffde77007083b05d833e0a1981943" Nov 25 10:44:57 crc kubenswrapper[4813]: I1125 10:44:57.435691 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e2vxxs" Nov 25 10:44:59 crc kubenswrapper[4813]: I1125 10:44:59.476026 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ggk26" Nov 25 10:44:59 crc kubenswrapper[4813]: I1125 10:44:59.477244 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ggk26" Nov 25 10:44:59 crc kubenswrapper[4813]: I1125 10:44:59.534643 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ggk26" Nov 25 10:45:00 crc kubenswrapper[4813]: I1125 10:45:00.140820 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401125-hq7gs"] Nov 25 10:45:00 crc kubenswrapper[4813]: E1125 10:45:00.141066 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b35384e6-d6f1-4613-b61a-5e324239b7eb" containerName="pull" Nov 25 10:45:00 crc kubenswrapper[4813]: I1125 10:45:00.141081 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="b35384e6-d6f1-4613-b61a-5e324239b7eb" containerName="pull" Nov 25 10:45:00 crc kubenswrapper[4813]: E1125 10:45:00.141098 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b35384e6-d6f1-4613-b61a-5e324239b7eb" containerName="extract" Nov 25 10:45:00 crc kubenswrapper[4813]: I1125 10:45:00.141104 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="b35384e6-d6f1-4613-b61a-5e324239b7eb" containerName="extract" Nov 25 10:45:00 crc kubenswrapper[4813]: E1125 10:45:00.141118 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b35384e6-d6f1-4613-b61a-5e324239b7eb" containerName="util" Nov 25 10:45:00 crc kubenswrapper[4813]: I1125 10:45:00.141125 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="b35384e6-d6f1-4613-b61a-5e324239b7eb" containerName="util" Nov 25 10:45:00 crc 
kubenswrapper[4813]: I1125 10:45:00.141241 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="b35384e6-d6f1-4613-b61a-5e324239b7eb" containerName="extract" Nov 25 10:45:00 crc kubenswrapper[4813]: I1125 10:45:00.141663 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401125-hq7gs" Nov 25 10:45:00 crc kubenswrapper[4813]: I1125 10:45:00.144755 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 25 10:45:00 crc kubenswrapper[4813]: I1125 10:45:00.148004 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 25 10:45:00 crc kubenswrapper[4813]: I1125 10:45:00.150631 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401125-hq7gs"] Nov 25 10:45:00 crc kubenswrapper[4813]: I1125 10:45:00.183644 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3432c0e3-bc13-4ed2-a710-d89cfe27cea2-config-volume\") pod \"collect-profiles-29401125-hq7gs\" (UID: \"3432c0e3-bc13-4ed2-a710-d89cfe27cea2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401125-hq7gs" Nov 25 10:45:00 crc kubenswrapper[4813]: I1125 10:45:00.183794 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3432c0e3-bc13-4ed2-a710-d89cfe27cea2-secret-volume\") pod \"collect-profiles-29401125-hq7gs\" (UID: \"3432c0e3-bc13-4ed2-a710-d89cfe27cea2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401125-hq7gs" Nov 25 10:45:00 crc kubenswrapper[4813]: I1125 10:45:00.183830 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7ghl\" (UniqueName: \"kubernetes.io/projected/3432c0e3-bc13-4ed2-a710-d89cfe27cea2-kube-api-access-z7ghl\") pod \"collect-profiles-29401125-hq7gs\" (UID: \"3432c0e3-bc13-4ed2-a710-d89cfe27cea2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401125-hq7gs" Nov 25 10:45:00 crc kubenswrapper[4813]: I1125 10:45:00.284839 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3432c0e3-bc13-4ed2-a710-d89cfe27cea2-secret-volume\") pod \"collect-profiles-29401125-hq7gs\" (UID: \"3432c0e3-bc13-4ed2-a710-d89cfe27cea2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401125-hq7gs" Nov 25 10:45:00 crc kubenswrapper[4813]: I1125 10:45:00.285937 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7ghl\" (UniqueName: \"kubernetes.io/projected/3432c0e3-bc13-4ed2-a710-d89cfe27cea2-kube-api-access-z7ghl\") pod \"collect-profiles-29401125-hq7gs\" (UID: \"3432c0e3-bc13-4ed2-a710-d89cfe27cea2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401125-hq7gs" Nov 25 10:45:00 crc kubenswrapper[4813]: I1125 10:45:00.286168 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3432c0e3-bc13-4ed2-a710-d89cfe27cea2-config-volume\") pod \"collect-profiles-29401125-hq7gs\" (UID: \"3432c0e3-bc13-4ed2-a710-d89cfe27cea2\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29401125-hq7gs" Nov 25 10:45:00 crc kubenswrapper[4813]: I1125 10:45:00.287165 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3432c0e3-bc13-4ed2-a710-d89cfe27cea2-config-volume\") pod \"collect-profiles-29401125-hq7gs\" (UID: \"3432c0e3-bc13-4ed2-a710-d89cfe27cea2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401125-hq7gs" Nov 25 10:45:00 crc kubenswrapper[4813]: I1125 10:45:00.298830 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3432c0e3-bc13-4ed2-a710-d89cfe27cea2-secret-volume\") pod \"collect-profiles-29401125-hq7gs\" (UID: \"3432c0e3-bc13-4ed2-a710-d89cfe27cea2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401125-hq7gs" Nov 25 10:45:00 crc kubenswrapper[4813]: I1125 10:45:00.304208 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7ghl\" (UniqueName: \"kubernetes.io/projected/3432c0e3-bc13-4ed2-a710-d89cfe27cea2-kube-api-access-z7ghl\") pod \"collect-profiles-29401125-hq7gs\" (UID: \"3432c0e3-bc13-4ed2-a710-d89cfe27cea2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401125-hq7gs" Nov 25 10:45:00 crc kubenswrapper[4813]: I1125 10:45:00.466479 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401125-hq7gs" Nov 25 10:45:00 crc kubenswrapper[4813]: I1125 10:45:00.502379 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ggk26" Nov 25 10:45:00 crc kubenswrapper[4813]: I1125 10:45:00.557549 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ggk26"] Nov 25 10:45:00 crc kubenswrapper[4813]: I1125 10:45:00.701571 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401125-hq7gs"] Nov 25 10:45:01 crc kubenswrapper[4813]: I1125 10:45:01.460929 4813 generic.go:334] "Generic (PLEG): container finished" podID="3432c0e3-bc13-4ed2-a710-d89cfe27cea2" containerID="137e38318037d0f66d5a99741d509d6b6c00a184c98633a23a82acd248f3a68a" exitCode=0 Nov 25 10:45:01 crc kubenswrapper[4813]: I1125 10:45:01.461001 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401125-hq7gs" event={"ID":"3432c0e3-bc13-4ed2-a710-d89cfe27cea2","Type":"ContainerDied","Data":"137e38318037d0f66d5a99741d509d6b6c00a184c98633a23a82acd248f3a68a"} Nov 25 10:45:01 crc kubenswrapper[4813]: I1125 10:45:01.461571 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401125-hq7gs" event={"ID":"3432c0e3-bc13-4ed2-a710-d89cfe27cea2","Type":"ContainerStarted","Data":"d0e4a7c63ef37fbc3bacc752fe2c883dd9fc409bc49828e92a2af7e298c46b18"} Nov 25 10:45:02 crc kubenswrapper[4813]: I1125 10:45:02.466963 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ggk26" podUID="c2e06c80-525d-4ded-90b5-cb4823ae8512" containerName="registry-server" containerID="cri-o://0e58b9be38d2814b5e806e854f989c0dbb7ebb9b820ed7c5a090c50792df02ea" gracePeriod=2 Nov 25 10:45:02 crc kubenswrapper[4813]: I1125 10:45:02.762497 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401125-hq7gs" Nov 25 10:45:02 crc kubenswrapper[4813]: I1125 10:45:02.821621 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3432c0e3-bc13-4ed2-a710-d89cfe27cea2-config-volume\") pod \"3432c0e3-bc13-4ed2-a710-d89cfe27cea2\" (UID: \"3432c0e3-bc13-4ed2-a710-d89cfe27cea2\") " Nov 25 10:45:02 crc kubenswrapper[4813]: I1125 10:45:02.821782 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7ghl\" (UniqueName: \"kubernetes.io/projected/3432c0e3-bc13-4ed2-a710-d89cfe27cea2-kube-api-access-z7ghl\") pod \"3432c0e3-bc13-4ed2-a710-d89cfe27cea2\" (UID: \"3432c0e3-bc13-4ed2-a710-d89cfe27cea2\") " Nov 25 10:45:02 crc kubenswrapper[4813]: I1125 10:45:02.821928 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3432c0e3-bc13-4ed2-a710-d89cfe27cea2-secret-volume\") pod \"3432c0e3-bc13-4ed2-a710-d89cfe27cea2\" (UID: \"3432c0e3-bc13-4ed2-a710-d89cfe27cea2\") " Nov 25 10:45:02 crc kubenswrapper[4813]: I1125 10:45:02.822583 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3432c0e3-bc13-4ed2-a710-d89cfe27cea2-config-volume" (OuterVolumeSpecName: "config-volume") pod "3432c0e3-bc13-4ed2-a710-d89cfe27cea2" (UID: "3432c0e3-bc13-4ed2-a710-d89cfe27cea2"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:45:02 crc kubenswrapper[4813]: I1125 10:45:02.823095 4813 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3432c0e3-bc13-4ed2-a710-d89cfe27cea2-config-volume\") on node \"crc\" DevicePath \"\"" Nov 25 10:45:02 crc kubenswrapper[4813]: I1125 10:45:02.827933 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3432c0e3-bc13-4ed2-a710-d89cfe27cea2-kube-api-access-z7ghl" (OuterVolumeSpecName: "kube-api-access-z7ghl") pod "3432c0e3-bc13-4ed2-a710-d89cfe27cea2" (UID: "3432c0e3-bc13-4ed2-a710-d89cfe27cea2"). InnerVolumeSpecName "kube-api-access-z7ghl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:45:02 crc kubenswrapper[4813]: I1125 10:45:02.828912 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3432c0e3-bc13-4ed2-a710-d89cfe27cea2-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3432c0e3-bc13-4ed2-a710-d89cfe27cea2" (UID: "3432c0e3-bc13-4ed2-a710-d89cfe27cea2"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:45:02 crc kubenswrapper[4813]: I1125 10:45:02.916784 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ggk26" Nov 25 10:45:02 crc kubenswrapper[4813]: I1125 10:45:02.924497 4813 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3432c0e3-bc13-4ed2-a710-d89cfe27cea2-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 25 10:45:02 crc kubenswrapper[4813]: I1125 10:45:02.924558 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z7ghl\" (UniqueName: \"kubernetes.io/projected/3432c0e3-bc13-4ed2-a710-d89cfe27cea2-kube-api-access-z7ghl\") on node \"crc\" DevicePath \"\"" Nov 25 10:45:03 crc kubenswrapper[4813]: I1125 10:45:03.025349 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njttj\" (UniqueName: \"kubernetes.io/projected/c2e06c80-525d-4ded-90b5-cb4823ae8512-kube-api-access-njttj\") pod \"c2e06c80-525d-4ded-90b5-cb4823ae8512\" (UID: \"c2e06c80-525d-4ded-90b5-cb4823ae8512\") " Nov 25 10:45:03 crc kubenswrapper[4813]: I1125 10:45:03.025483 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2e06c80-525d-4ded-90b5-cb4823ae8512-catalog-content\") pod \"c2e06c80-525d-4ded-90b5-cb4823ae8512\" (UID: \"c2e06c80-525d-4ded-90b5-cb4823ae8512\") " Nov 25 10:45:03 crc kubenswrapper[4813]: I1125 10:45:03.025565 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2e06c80-525d-4ded-90b5-cb4823ae8512-utilities\") pod \"c2e06c80-525d-4ded-90b5-cb4823ae8512\" (UID: \"c2e06c80-525d-4ded-90b5-cb4823ae8512\") " Nov 25 10:45:03 crc kubenswrapper[4813]: I1125 10:45:03.026576 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2e06c80-525d-4ded-90b5-cb4823ae8512-utilities" (OuterVolumeSpecName: "utilities") pod "c2e06c80-525d-4ded-90b5-cb4823ae8512" (UID: "c2e06c80-525d-4ded-90b5-cb4823ae8512"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 10:45:03 crc kubenswrapper[4813]: I1125 10:45:03.029524 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2e06c80-525d-4ded-90b5-cb4823ae8512-kube-api-access-njttj" (OuterVolumeSpecName: "kube-api-access-njttj") pod "c2e06c80-525d-4ded-90b5-cb4823ae8512" (UID: "c2e06c80-525d-4ded-90b5-cb4823ae8512"). InnerVolumeSpecName "kube-api-access-njttj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:45:03 crc kubenswrapper[4813]: I1125 10:45:03.119382 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2e06c80-525d-4ded-90b5-cb4823ae8512-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c2e06c80-525d-4ded-90b5-cb4823ae8512" (UID: "c2e06c80-525d-4ded-90b5-cb4823ae8512"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 10:45:03 crc kubenswrapper[4813]: I1125 10:45:03.127789 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2e06c80-525d-4ded-90b5-cb4823ae8512-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 10:45:03 crc kubenswrapper[4813]: I1125 10:45:03.127834 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njttj\" (UniqueName: \"kubernetes.io/projected/c2e06c80-525d-4ded-90b5-cb4823ae8512-kube-api-access-njttj\") on node \"crc\" DevicePath \"\"" Nov 25 10:45:03 crc kubenswrapper[4813]: I1125 10:45:03.127846 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2e06c80-525d-4ded-90b5-cb4823ae8512-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 10:45:03 crc kubenswrapper[4813]: I1125 10:45:03.474604 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401125-hq7gs" event={"ID":"3432c0e3-bc13-4ed2-a710-d89cfe27cea2","Type":"ContainerDied","Data":"d0e4a7c63ef37fbc3bacc752fe2c883dd9fc409bc49828e92a2af7e298c46b18"} Nov 25 10:45:03 crc kubenswrapper[4813]: I1125 10:45:03.474655 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0e4a7c63ef37fbc3bacc752fe2c883dd9fc409bc49828e92a2af7e298c46b18" Nov 25 10:45:03 crc kubenswrapper[4813]: I1125 10:45:03.474745 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401125-hq7gs" Nov 25 10:45:03 crc kubenswrapper[4813]: I1125 10:45:03.484294 4813 generic.go:334] "Generic (PLEG): container finished" podID="c2e06c80-525d-4ded-90b5-cb4823ae8512" containerID="0e58b9be38d2814b5e806e854f989c0dbb7ebb9b820ed7c5a090c50792df02ea" exitCode=0 Nov 25 10:45:03 crc kubenswrapper[4813]: I1125 10:45:03.484342 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ggk26" event={"ID":"c2e06c80-525d-4ded-90b5-cb4823ae8512","Type":"ContainerDied","Data":"0e58b9be38d2814b5e806e854f989c0dbb7ebb9b820ed7c5a090c50792df02ea"} Nov 25 10:45:03 crc kubenswrapper[4813]: I1125 10:45:03.484372 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ggk26" event={"ID":"c2e06c80-525d-4ded-90b5-cb4823ae8512","Type":"ContainerDied","Data":"70abf85947709342a847c811bcd124eb8e9537049dcf254b204da20e08d2865c"} Nov 25 10:45:03 crc kubenswrapper[4813]: I1125 10:45:03.484393 4813 scope.go:117] "RemoveContainer" containerID="0e58b9be38d2814b5e806e854f989c0dbb7ebb9b820ed7c5a090c50792df02ea" Nov 25 10:45:03 crc kubenswrapper[4813]: I1125 10:45:03.484540 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ggk26" Nov 25 10:45:03 crc kubenswrapper[4813]: I1125 10:45:03.511019 4813 scope.go:117] "RemoveContainer" containerID="f1fe2d15c647e5005e1b9852c68cc0498a4f6292e703db8d12101923d35b7de6" Nov 25 10:45:03 crc kubenswrapper[4813]: I1125 10:45:03.519344 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ggk26"] Nov 25 10:45:03 crc kubenswrapper[4813]: I1125 10:45:03.526570 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ggk26"] Nov 25 10:45:03 crc kubenswrapper[4813]: I1125 10:45:03.548292 4813 scope.go:117] "RemoveContainer" containerID="fb223e8e9b7c7018e96f1994fe2cfd1ed6185f2f285c57c869ac1e0602643fff" Nov 25 10:45:03 crc kubenswrapper[4813]: I1125 10:45:03.572291 4813 scope.go:117] "RemoveContainer" containerID="0e58b9be38d2814b5e806e854f989c0dbb7ebb9b820ed7c5a090c50792df02ea" Nov 25 10:45:03 crc kubenswrapper[4813]: E1125 10:45:03.572961 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e58b9be38d2814b5e806e854f989c0dbb7ebb9b820ed7c5a090c50792df02ea\": container with ID starting with 0e58b9be38d2814b5e806e854f989c0dbb7ebb9b820ed7c5a090c50792df02ea not found: ID does not exist" containerID="0e58b9be38d2814b5e806e854f989c0dbb7ebb9b820ed7c5a090c50792df02ea" Nov 25 10:45:03 crc kubenswrapper[4813]: I1125 10:45:03.573110 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e58b9be38d2814b5e806e854f989c0dbb7ebb9b820ed7c5a090c50792df02ea"} err="failed to get container status \"0e58b9be38d2814b5e806e854f989c0dbb7ebb9b820ed7c5a090c50792df02ea\": rpc error: code = NotFound desc = could not find container \"0e58b9be38d2814b5e806e854f989c0dbb7ebb9b820ed7c5a090c50792df02ea\": container with ID starting with 0e58b9be38d2814b5e806e854f989c0dbb7ebb9b820ed7c5a090c50792df02ea not found: ID does not exist" Nov 25 10:45:03 crc kubenswrapper[4813]: I1125 10:45:03.573149 4813 scope.go:117] "RemoveContainer" containerID="f1fe2d15c647e5005e1b9852c68cc0498a4f6292e703db8d12101923d35b7de6" Nov 25 10:45:03 crc kubenswrapper[4813]: E1125 10:45:03.573631 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1fe2d15c647e5005e1b9852c68cc0498a4f6292e703db8d12101923d35b7de6\": container with ID starting with f1fe2d15c647e5005e1b9852c68cc0498a4f6292e703db8d12101923d35b7de6 not found: ID does not exist" containerID="f1fe2d15c647e5005e1b9852c68cc0498a4f6292e703db8d12101923d35b7de6" Nov 25 10:45:03 crc kubenswrapper[4813]: I1125 10:45:03.573663 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1fe2d15c647e5005e1b9852c68cc0498a4f6292e703db8d12101923d35b7de6"} err="failed to get container status \"f1fe2d15c647e5005e1b9852c68cc0498a4f6292e703db8d12101923d35b7de6\": rpc error: code = NotFound desc = could not find container \"f1fe2d15c647e5005e1b9852c68cc0498a4f6292e703db8d12101923d35b7de6\": container with ID starting with f1fe2d15c647e5005e1b9852c68cc0498a4f6292e703db8d12101923d35b7de6 not found: ID does not exist" Nov 25 10:45:03 crc kubenswrapper[4813]: I1125 10:45:03.573721 4813 scope.go:117] "RemoveContainer" containerID="fb223e8e9b7c7018e96f1994fe2cfd1ed6185f2f285c57c869ac1e0602643fff" Nov 25 10:45:03 crc kubenswrapper[4813]: E1125 10:45:03.576376 4813 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"fb223e8e9b7c7018e96f1994fe2cfd1ed6185f2f285c57c869ac1e0602643fff\": container with ID starting with fb223e8e9b7c7018e96f1994fe2cfd1ed6185f2f285c57c869ac1e0602643fff not found: ID does not exist" containerID="fb223e8e9b7c7018e96f1994fe2cfd1ed6185f2f285c57c869ac1e0602643fff" Nov 25 10:45:03 crc kubenswrapper[4813]: I1125 10:45:03.576419 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb223e8e9b7c7018e96f1994fe2cfd1ed6185f2f285c57c869ac1e0602643fff"} err="failed to get container status \"fb223e8e9b7c7018e96f1994fe2cfd1ed6185f2f285c57c869ac1e0602643fff\": rpc error: code = NotFound desc = could not find container \"fb223e8e9b7c7018e96f1994fe2cfd1ed6185f2f285c57c869ac1e0602643fff\": container with ID starting with fb223e8e9b7c7018e96f1994fe2cfd1ed6185f2f285c57c869ac1e0602643fff not found: ID does not exist" Nov 25 10:45:03 crc kubenswrapper[4813]: I1125 10:45:03.637314 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2e06c80-525d-4ded-90b5-cb4823ae8512" path="/var/lib/kubelet/pods/c2e06c80-525d-4ded-90b5-cb4823ae8512/volumes" Nov 25 10:45:04 crc kubenswrapper[4813]: I1125 10:45:04.267879 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-nnkm5"] Nov 25 10:45:04 crc kubenswrapper[4813]: E1125 10:45:04.268821 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3432c0e3-bc13-4ed2-a710-d89cfe27cea2" containerName="collect-profiles" Nov 25 10:45:04 crc kubenswrapper[4813]: I1125 10:45:04.268858 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="3432c0e3-bc13-4ed2-a710-d89cfe27cea2" containerName="collect-profiles" Nov 25 10:45:04 crc kubenswrapper[4813]: E1125 10:45:04.268880 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2e06c80-525d-4ded-90b5-cb4823ae8512" containerName="registry-server" Nov 25 10:45:04 crc kubenswrapper[4813]: I1125 10:45:04.268890 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2e06c80-525d-4ded-90b5-cb4823ae8512" containerName="registry-server" Nov 25 10:45:04 crc kubenswrapper[4813]: E1125 10:45:04.268915 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2e06c80-525d-4ded-90b5-cb4823ae8512" containerName="extract-utilities" Nov 25 10:45:04 crc kubenswrapper[4813]: I1125 10:45:04.268930 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2e06c80-525d-4ded-90b5-cb4823ae8512" containerName="extract-utilities" Nov 25 10:45:04 crc kubenswrapper[4813]: E1125 10:45:04.268946 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2e06c80-525d-4ded-90b5-cb4823ae8512" containerName="extract-content" Nov 25 10:45:04 crc kubenswrapper[4813]: I1125 10:45:04.268955 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2e06c80-525d-4ded-90b5-cb4823ae8512" containerName="extract-content" Nov 25 10:45:04 crc kubenswrapper[4813]: I1125 10:45:04.269093 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="3432c0e3-bc13-4ed2-a710-d89cfe27cea2" containerName="collect-profiles" Nov 25 10:45:04 crc kubenswrapper[4813]: I1125 10:45:04.269122 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2e06c80-525d-4ded-90b5-cb4823ae8512" containerName="registry-server" Nov 25 10:45:04 crc kubenswrapper[4813]: I1125 10:45:04.269926 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-557fdffb88-nnkm5" Nov 25 10:45:04 crc kubenswrapper[4813]: I1125 10:45:04.271953 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Nov 25 10:45:04 crc kubenswrapper[4813]: I1125 10:45:04.272243 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-tsnsm" Nov 25 10:45:04 crc kubenswrapper[4813]: I1125 10:45:04.282855 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-nnkm5"] Nov 25 10:45:04 crc kubenswrapper[4813]: I1125 10:45:04.283231 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Nov 25 10:45:04 crc kubenswrapper[4813]: I1125 10:45:04.342566 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndpmw\" (UniqueName: \"kubernetes.io/projected/55e83365-d0ce-4274-a5f5-ee89147342bf-kube-api-access-ndpmw\") pod \"nmstate-operator-557fdffb88-nnkm5\" (UID: \"55e83365-d0ce-4274-a5f5-ee89147342bf\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-nnkm5" Nov 25 10:45:04 crc kubenswrapper[4813]: I1125 10:45:04.443882 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndpmw\" (UniqueName: \"kubernetes.io/projected/55e83365-d0ce-4274-a5f5-ee89147342bf-kube-api-access-ndpmw\") pod \"nmstate-operator-557fdffb88-nnkm5\" (UID: \"55e83365-d0ce-4274-a5f5-ee89147342bf\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-nnkm5" Nov 25 10:45:04 crc kubenswrapper[4813]: I1125 10:45:04.468711 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndpmw\" (UniqueName: \"kubernetes.io/projected/55e83365-d0ce-4274-a5f5-ee89147342bf-kube-api-access-ndpmw\") pod \"nmstate-operator-557fdffb88-nnkm5\" (UID: \"55e83365-d0ce-4274-a5f5-ee89147342bf\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-nnkm5" Nov 25 10:45:04 crc kubenswrapper[4813]: I1125 10:45:04.592214 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-557fdffb88-nnkm5" Nov 25 10:45:05 crc kubenswrapper[4813]: I1125 10:45:05.023348 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-nnkm5"] Nov 25 10:45:05 crc kubenswrapper[4813]: W1125 10:45:05.030339 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod55e83365_d0ce_4274_a5f5_ee89147342bf.slice/crio-a69a493d3a340beec5cb8637069842ed0e42056b1a8c54b61a8127f274d60438 WatchSource:0}: Error finding container a69a493d3a340beec5cb8637069842ed0e42056b1a8c54b61a8127f274d60438: Status 404 returned error can't find the container with id a69a493d3a340beec5cb8637069842ed0e42056b1a8c54b61a8127f274d60438 Nov 25 10:45:05 crc kubenswrapper[4813]: I1125 10:45:05.506137 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-557fdffb88-nnkm5" event={"ID":"55e83365-d0ce-4274-a5f5-ee89147342bf","Type":"ContainerStarted","Data":"a69a493d3a340beec5cb8637069842ed0e42056b1a8c54b61a8127f274d60438"} Nov 25 10:45:08 crc kubenswrapper[4813]: I1125 10:45:08.526564 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-557fdffb88-nnkm5" event={"ID":"55e83365-d0ce-4274-a5f5-ee89147342bf","Type":"ContainerStarted","Data":"03ad33b61935a6099704b215069fabfbdfd467e5d6d8e93645e0749f3dbcaa35"} Nov 25 10:45:08 crc kubenswrapper[4813]: I1125 10:45:08.545778 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-557fdffb88-nnkm5" podStartSLOduration=1.6911223770000001 podStartE2EDuration="4.545757767s" podCreationTimestamp="2025-11-25 10:45:04 +0000 UTC" firstStartedPulling="2025-11-25 10:45:05.034522621 +0000 UTC m=+802.164232507" lastFinishedPulling="2025-11-25 10:45:07.889158011 +0000 UTC m=+805.018867897" observedRunningTime="2025-11-25 10:45:08.543459792 +0000 UTC m=+805.673169688" watchObservedRunningTime="2025-11-25 10:45:08.545757767 +0000 UTC m=+805.675467653" Nov 25 10:45:21 crc kubenswrapper[4813]: I1125 10:45:21.967295 4813 patch_prober.go:28] interesting pod/machine-config-daemon-knhz8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 10:45:21 crc kubenswrapper[4813]: I1125 10:45:21.967741 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" podUID="8ece7e9c-d49a-4348-98ec-bd6ab589f750" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 10:45:30 crc kubenswrapper[4813]: I1125 10:45:30.724748 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-v5l6z"] Nov 25 10:45:30 crc kubenswrapper[4813]: I1125 10:45:30.726832 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-v5l6z" Nov 25 10:45:30 crc kubenswrapper[4813]: I1125 10:45:30.729246 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-q84kt" Nov 25 10:45:30 crc kubenswrapper[4813]: I1125 10:45:30.740190 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-v5l6z"] Nov 25 10:45:30 crc kubenswrapper[4813]: I1125 10:45:30.746172 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-92nqv"] Nov 25 10:45:30 crc kubenswrapper[4813]: I1125 10:45:30.747341 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-92nqv" Nov 25 10:45:30 crc kubenswrapper[4813]: I1125 10:45:30.798049 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Nov 25 10:45:30 crc kubenswrapper[4813]: I1125 10:45:30.804279 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-92nqv"] Nov 25 10:45:30 crc kubenswrapper[4813]: I1125 10:45:30.811495 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-8b4xz"] Nov 25 10:45:30 crc kubenswrapper[4813]: I1125 10:45:30.812654 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-8b4xz" Nov 25 10:45:30 crc kubenswrapper[4813]: I1125 10:45:30.899871 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qqgx\" (UniqueName: \"kubernetes.io/projected/4c2eda27-6d33-43b0-847a-7da2f657251e-kube-api-access-4qqgx\") pod \"nmstate-metrics-5dcf9c57c5-v5l6z\" (UID: \"4c2eda27-6d33-43b0-847a-7da2f657251e\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-v5l6z" Nov 25 10:45:30 crc kubenswrapper[4813]: I1125 10:45:30.900226 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/98566240-794d-4769-8ca4-7e92f2e158cf-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-92nqv\" (UID: \"98566240-794d-4769-8ca4-7e92f2e158cf\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-92nqv" Nov 25 10:45:30 crc kubenswrapper[4813]: I1125 10:45:30.900604 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7lsm\" (UniqueName: \"kubernetes.io/projected/98566240-794d-4769-8ca4-7e92f2e158cf-kube-api-access-b7lsm\") pod \"nmstate-webhook-6b89b748d8-92nqv\" (UID: \"98566240-794d-4769-8ca4-7e92f2e158cf\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-92nqv" Nov 25 10:45:30 crc kubenswrapper[4813]: I1125 10:45:30.917169 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-k5lrd"] Nov 25 10:45:30 crc kubenswrapper[4813]: I1125 10:45:30.918007 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-k5lrd" Nov 25 10:45:30 crc kubenswrapper[4813]: I1125 10:45:30.922567 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Nov 25 10:45:30 crc kubenswrapper[4813]: I1125 10:45:30.922892 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Nov 25 10:45:30 crc kubenswrapper[4813]: I1125 10:45:30.922921 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-lkjmd" Nov 25 10:45:30 crc kubenswrapper[4813]: I1125 10:45:30.934998 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-k5lrd"] Nov 25 10:45:31 crc kubenswrapper[4813]: I1125 10:45:31.002414 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qqgx\" (UniqueName: \"kubernetes.io/projected/4c2eda27-6d33-43b0-847a-7da2f657251e-kube-api-access-4qqgx\") pod \"nmstate-metrics-5dcf9c57c5-v5l6z\" (UID: \"4c2eda27-6d33-43b0-847a-7da2f657251e\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-v5l6z" Nov 25 10:45:31 crc kubenswrapper[4813]: I1125 10:45:31.002463 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trwvl\" (UniqueName: \"kubernetes.io/projected/63cd6170-52aa-4bfb-8376-0b7a8da3f64e-kube-api-access-trwvl\") pod \"nmstate-handler-8b4xz\" (UID: \"63cd6170-52aa-4bfb-8376-0b7a8da3f64e\") " pod="openshift-nmstate/nmstate-handler-8b4xz" Nov 25 10:45:31 crc kubenswrapper[4813]: I1125 10:45:31.002496 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/98566240-794d-4769-8ca4-7e92f2e158cf-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-92nqv\" (UID: \"98566240-794d-4769-8ca4-7e92f2e158cf\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-92nqv" Nov 25 10:45:31 crc kubenswrapper[4813]: I1125 10:45:31.002520 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/63cd6170-52aa-4bfb-8376-0b7a8da3f64e-dbus-socket\") pod \"nmstate-handler-8b4xz\" (UID: \"63cd6170-52aa-4bfb-8376-0b7a8da3f64e\") " pod="openshift-nmstate/nmstate-handler-8b4xz" Nov 25 10:45:31 crc kubenswrapper[4813]: I1125 10:45:31.002552 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/63cd6170-52aa-4bfb-8376-0b7a8da3f64e-nmstate-lock\") pod \"nmstate-handler-8b4xz\" (UID: \"63cd6170-52aa-4bfb-8376-0b7a8da3f64e\") " pod="openshift-nmstate/nmstate-handler-8b4xz" Nov 25 10:45:31 crc kubenswrapper[4813]: I1125 10:45:31.002583 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7lsm\" (UniqueName: \"kubernetes.io/projected/98566240-794d-4769-8ca4-7e92f2e158cf-kube-api-access-b7lsm\") pod \"nmstate-webhook-6b89b748d8-92nqv\" (UID: \"98566240-794d-4769-8ca4-7e92f2e158cf\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-92nqv" Nov 25 10:45:31 crc kubenswrapper[4813]: I1125 10:45:31.002604 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/63cd6170-52aa-4bfb-8376-0b7a8da3f64e-ovs-socket\") pod \"nmstate-handler-8b4xz\" (UID: 
\"63cd6170-52aa-4bfb-8376-0b7a8da3f64e\") " pod="openshift-nmstate/nmstate-handler-8b4xz" Nov 25 10:45:31 crc kubenswrapper[4813]: I1125 10:45:31.009804 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/98566240-794d-4769-8ca4-7e92f2e158cf-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-92nqv\" (UID: \"98566240-794d-4769-8ca4-7e92f2e158cf\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-92nqv" Nov 25 10:45:31 crc kubenswrapper[4813]: I1125 10:45:31.019604 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7lsm\" (UniqueName: \"kubernetes.io/projected/98566240-794d-4769-8ca4-7e92f2e158cf-kube-api-access-b7lsm\") pod \"nmstate-webhook-6b89b748d8-92nqv\" (UID: \"98566240-794d-4769-8ca4-7e92f2e158cf\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-92nqv" Nov 25 10:45:31 crc kubenswrapper[4813]: I1125 10:45:31.020744 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qqgx\" (UniqueName: \"kubernetes.io/projected/4c2eda27-6d33-43b0-847a-7da2f657251e-kube-api-access-4qqgx\") pod \"nmstate-metrics-5dcf9c57c5-v5l6z\" (UID: \"4c2eda27-6d33-43b0-847a-7da2f657251e\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-v5l6z" Nov 25 10:45:31 crc kubenswrapper[4813]: I1125 10:45:31.047648 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-v5l6z" Nov 25 10:45:31 crc kubenswrapper[4813]: I1125 10:45:31.104030 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/d4f77244-1065-4f49-9ab3-23f0fb4e24c9-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-k5lrd\" (UID: \"d4f77244-1065-4f49-9ab3-23f0fb4e24c9\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-k5lrd" Nov 25 10:45:31 crc kubenswrapper[4813]: I1125 10:45:31.104477 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/d4f77244-1065-4f49-9ab3-23f0fb4e24c9-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-k5lrd\" (UID: \"d4f77244-1065-4f49-9ab3-23f0fb4e24c9\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-k5lrd" Nov 25 10:45:31 crc kubenswrapper[4813]: I1125 10:45:31.104510 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/63cd6170-52aa-4bfb-8376-0b7a8da3f64e-dbus-socket\") pod \"nmstate-handler-8b4xz\" (UID: \"63cd6170-52aa-4bfb-8376-0b7a8da3f64e\") " pod="openshift-nmstate/nmstate-handler-8b4xz" Nov 25 10:45:31 crc kubenswrapper[4813]: I1125 10:45:31.104552 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/63cd6170-52aa-4bfb-8376-0b7a8da3f64e-nmstate-lock\") pod \"nmstate-handler-8b4xz\" (UID: \"63cd6170-52aa-4bfb-8376-0b7a8da3f64e\") " pod="openshift-nmstate/nmstate-handler-8b4xz" Nov 25 10:45:31 crc kubenswrapper[4813]: I1125 10:45:31.104597 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vp8c8\" (UniqueName: \"kubernetes.io/projected/d4f77244-1065-4f49-9ab3-23f0fb4e24c9-kube-api-access-vp8c8\") pod \"nmstate-console-plugin-5874bd7bc5-k5lrd\" (UID: \"d4f77244-1065-4f49-9ab3-23f0fb4e24c9\") " 
pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-k5lrd" Nov 25 10:45:31 crc kubenswrapper[4813]: I1125 10:45:31.104616 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/63cd6170-52aa-4bfb-8376-0b7a8da3f64e-ovs-socket\") pod \"nmstate-handler-8b4xz\" (UID: \"63cd6170-52aa-4bfb-8376-0b7a8da3f64e\") " pod="openshift-nmstate/nmstate-handler-8b4xz" Nov 25 10:45:31 crc kubenswrapper[4813]: I1125 10:45:31.104646 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trwvl\" (UniqueName: \"kubernetes.io/projected/63cd6170-52aa-4bfb-8376-0b7a8da3f64e-kube-api-access-trwvl\") pod \"nmstate-handler-8b4xz\" (UID: \"63cd6170-52aa-4bfb-8376-0b7a8da3f64e\") " pod="openshift-nmstate/nmstate-handler-8b4xz" Nov 25 10:45:31 crc kubenswrapper[4813]: I1125 10:45:31.105375 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/63cd6170-52aa-4bfb-8376-0b7a8da3f64e-nmstate-lock\") pod \"nmstate-handler-8b4xz\" (UID: \"63cd6170-52aa-4bfb-8376-0b7a8da3f64e\") " pod="openshift-nmstate/nmstate-handler-8b4xz" Nov 25 10:45:31 crc kubenswrapper[4813]: I1125 10:45:31.105398 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/63cd6170-52aa-4bfb-8376-0b7a8da3f64e-ovs-socket\") pod \"nmstate-handler-8b4xz\" (UID: \"63cd6170-52aa-4bfb-8376-0b7a8da3f64e\") " pod="openshift-nmstate/nmstate-handler-8b4xz" Nov 25 10:45:31 crc kubenswrapper[4813]: I1125 10:45:31.105438 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/63cd6170-52aa-4bfb-8376-0b7a8da3f64e-dbus-socket\") pod \"nmstate-handler-8b4xz\" (UID: \"63cd6170-52aa-4bfb-8376-0b7a8da3f64e\") " pod="openshift-nmstate/nmstate-handler-8b4xz" Nov 25 10:45:31 crc kubenswrapper[4813]: I1125 10:45:31.115828 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-92nqv" Nov 25 10:45:31 crc kubenswrapper[4813]: I1125 10:45:31.125444 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5ffb5f4cbf-vhnwr"] Nov 25 10:45:31 crc kubenswrapper[4813]: I1125 10:45:31.128656 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5ffb5f4cbf-vhnwr" Nov 25 10:45:31 crc kubenswrapper[4813]: I1125 10:45:31.157222 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trwvl\" (UniqueName: \"kubernetes.io/projected/63cd6170-52aa-4bfb-8376-0b7a8da3f64e-kube-api-access-trwvl\") pod \"nmstate-handler-8b4xz\" (UID: \"63cd6170-52aa-4bfb-8376-0b7a8da3f64e\") " pod="openshift-nmstate/nmstate-handler-8b4xz" Nov 25 10:45:31 crc kubenswrapper[4813]: I1125 10:45:31.177260 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5ffb5f4cbf-vhnwr"] Nov 25 10:45:31 crc kubenswrapper[4813]: I1125 10:45:31.206709 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/d4f77244-1065-4f49-9ab3-23f0fb4e24c9-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-k5lrd\" (UID: \"d4f77244-1065-4f49-9ab3-23f0fb4e24c9\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-k5lrd" Nov 25 10:45:31 crc kubenswrapper[4813]: I1125 10:45:31.206801 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vp8c8\" (UniqueName: \"kubernetes.io/projected/d4f77244-1065-4f49-9ab3-23f0fb4e24c9-kube-api-access-vp8c8\") pod \"nmstate-console-plugin-5874bd7bc5-k5lrd\" (UID: \"d4f77244-1065-4f49-9ab3-23f0fb4e24c9\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-k5lrd" Nov 25 10:45:31 crc kubenswrapper[4813]: I1125 10:45:31.206844 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/d4f77244-1065-4f49-9ab3-23f0fb4e24c9-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-k5lrd\" (UID: \"d4f77244-1065-4f49-9ab3-23f0fb4e24c9\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-k5lrd" Nov 25 10:45:31 crc kubenswrapper[4813]: E1125 10:45:31.206987 4813 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Nov 25 10:45:31 crc kubenswrapper[4813]: E1125 10:45:31.207041 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d4f77244-1065-4f49-9ab3-23f0fb4e24c9-plugin-serving-cert podName:d4f77244-1065-4f49-9ab3-23f0fb4e24c9 nodeName:}" failed. No retries permitted until 2025-11-25 10:45:31.707022463 +0000 UTC m=+828.836732349 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/d4f77244-1065-4f49-9ab3-23f0fb4e24c9-plugin-serving-cert") pod "nmstate-console-plugin-5874bd7bc5-k5lrd" (UID: "d4f77244-1065-4f49-9ab3-23f0fb4e24c9") : secret "plugin-serving-cert" not found Nov 25 10:45:31 crc kubenswrapper[4813]: I1125 10:45:31.208033 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/d4f77244-1065-4f49-9ab3-23f0fb4e24c9-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-k5lrd\" (UID: \"d4f77244-1065-4f49-9ab3-23f0fb4e24c9\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-k5lrd" Nov 25 10:45:31 crc kubenswrapper[4813]: I1125 10:45:31.233950 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vp8c8\" (UniqueName: \"kubernetes.io/projected/d4f77244-1065-4f49-9ab3-23f0fb4e24c9-kube-api-access-vp8c8\") pod \"nmstate-console-plugin-5874bd7bc5-k5lrd\" (UID: \"d4f77244-1065-4f49-9ab3-23f0fb4e24c9\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-k5lrd" Nov 25 10:45:31 crc kubenswrapper[4813]: I1125 10:45:31.308562 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/10d8da62-ad09-41ab-a9d2-7aaf7b99c939-service-ca\") pod \"console-5ffb5f4cbf-vhnwr\" (UID: \"10d8da62-ad09-41ab-a9d2-7aaf7b99c939\") " pod="openshift-console/console-5ffb5f4cbf-vhnwr" Nov 25 10:45:31 crc kubenswrapper[4813]: I1125 10:45:31.308601 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/10d8da62-ad09-41ab-a9d2-7aaf7b99c939-trusted-ca-bundle\") pod \"console-5ffb5f4cbf-vhnwr\" (UID: \"10d8da62-ad09-41ab-a9d2-7aaf7b99c939\") " pod="openshift-console/console-5ffb5f4cbf-vhnwr" Nov 25 10:45:31 crc kubenswrapper[4813]: I1125 10:45:31.308651 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/10d8da62-ad09-41ab-a9d2-7aaf7b99c939-console-oauth-config\") pod \"console-5ffb5f4cbf-vhnwr\" (UID: \"10d8da62-ad09-41ab-a9d2-7aaf7b99c939\") " pod="openshift-console/console-5ffb5f4cbf-vhnwr" Nov 25 10:45:31 crc kubenswrapper[4813]: I1125 10:45:31.308713 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/10d8da62-ad09-41ab-a9d2-7aaf7b99c939-console-config\") pod \"console-5ffb5f4cbf-vhnwr\" (UID: \"10d8da62-ad09-41ab-a9d2-7aaf7b99c939\") " pod="openshift-console/console-5ffb5f4cbf-vhnwr" Nov 25 10:45:31 crc kubenswrapper[4813]: I1125 10:45:31.308738 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qvdz\" (UniqueName: \"kubernetes.io/projected/10d8da62-ad09-41ab-a9d2-7aaf7b99c939-kube-api-access-6qvdz\") pod \"console-5ffb5f4cbf-vhnwr\" (UID: \"10d8da62-ad09-41ab-a9d2-7aaf7b99c939\") " pod="openshift-console/console-5ffb5f4cbf-vhnwr" Nov 25 10:45:31 crc kubenswrapper[4813]: I1125 10:45:31.308758 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/10d8da62-ad09-41ab-a9d2-7aaf7b99c939-console-serving-cert\") pod \"console-5ffb5f4cbf-vhnwr\" (UID: \"10d8da62-ad09-41ab-a9d2-7aaf7b99c939\") " 
pod="openshift-console/console-5ffb5f4cbf-vhnwr" Nov 25 10:45:31 crc kubenswrapper[4813]: I1125 10:45:31.308787 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/10d8da62-ad09-41ab-a9d2-7aaf7b99c939-oauth-serving-cert\") pod \"console-5ffb5f4cbf-vhnwr\" (UID: \"10d8da62-ad09-41ab-a9d2-7aaf7b99c939\") " pod="openshift-console/console-5ffb5f4cbf-vhnwr" Nov 25 10:45:31 crc kubenswrapper[4813]: I1125 10:45:31.321069 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-v5l6z"] Nov 25 10:45:31 crc kubenswrapper[4813]: I1125 10:45:31.409707 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/10d8da62-ad09-41ab-a9d2-7aaf7b99c939-console-oauth-config\") pod \"console-5ffb5f4cbf-vhnwr\" (UID: \"10d8da62-ad09-41ab-a9d2-7aaf7b99c939\") " pod="openshift-console/console-5ffb5f4cbf-vhnwr" Nov 25 10:45:31 crc kubenswrapper[4813]: I1125 10:45:31.410045 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/10d8da62-ad09-41ab-a9d2-7aaf7b99c939-console-config\") pod \"console-5ffb5f4cbf-vhnwr\" (UID: \"10d8da62-ad09-41ab-a9d2-7aaf7b99c939\") " pod="openshift-console/console-5ffb5f4cbf-vhnwr" Nov 25 10:45:31 crc kubenswrapper[4813]: I1125 10:45:31.410069 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qvdz\" (UniqueName: \"kubernetes.io/projected/10d8da62-ad09-41ab-a9d2-7aaf7b99c939-kube-api-access-6qvdz\") pod \"console-5ffb5f4cbf-vhnwr\" (UID: \"10d8da62-ad09-41ab-a9d2-7aaf7b99c939\") " pod="openshift-console/console-5ffb5f4cbf-vhnwr" Nov 25 10:45:31 crc kubenswrapper[4813]: I1125 10:45:31.410096 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/10d8da62-ad09-41ab-a9d2-7aaf7b99c939-console-serving-cert\") pod \"console-5ffb5f4cbf-vhnwr\" (UID: \"10d8da62-ad09-41ab-a9d2-7aaf7b99c939\") " pod="openshift-console/console-5ffb5f4cbf-vhnwr" Nov 25 10:45:31 crc kubenswrapper[4813]: I1125 10:45:31.410128 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/10d8da62-ad09-41ab-a9d2-7aaf7b99c939-oauth-serving-cert\") pod \"console-5ffb5f4cbf-vhnwr\" (UID: \"10d8da62-ad09-41ab-a9d2-7aaf7b99c939\") " pod="openshift-console/console-5ffb5f4cbf-vhnwr" Nov 25 10:45:31 crc kubenswrapper[4813]: I1125 10:45:31.410145 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/10d8da62-ad09-41ab-a9d2-7aaf7b99c939-service-ca\") pod \"console-5ffb5f4cbf-vhnwr\" (UID: \"10d8da62-ad09-41ab-a9d2-7aaf7b99c939\") " pod="openshift-console/console-5ffb5f4cbf-vhnwr" Nov 25 10:45:31 crc kubenswrapper[4813]: I1125 10:45:31.410160 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/10d8da62-ad09-41ab-a9d2-7aaf7b99c939-trusted-ca-bundle\") pod \"console-5ffb5f4cbf-vhnwr\" (UID: \"10d8da62-ad09-41ab-a9d2-7aaf7b99c939\") " pod="openshift-console/console-5ffb5f4cbf-vhnwr" Nov 25 10:45:31 crc kubenswrapper[4813]: I1125 10:45:31.411401 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/10d8da62-ad09-41ab-a9d2-7aaf7b99c939-trusted-ca-bundle\") pod \"console-5ffb5f4cbf-vhnwr\" (UID: \"10d8da62-ad09-41ab-a9d2-7aaf7b99c939\") " pod="openshift-console/console-5ffb5f4cbf-vhnwr" Nov 25 10:45:31 crc kubenswrapper[4813]: I1125 10:45:31.413111 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/10d8da62-ad09-41ab-a9d2-7aaf7b99c939-console-config\") pod \"console-5ffb5f4cbf-vhnwr\" (UID: \"10d8da62-ad09-41ab-a9d2-7aaf7b99c939\") " pod="openshift-console/console-5ffb5f4cbf-vhnwr" Nov 25 10:45:31 crc kubenswrapper[4813]: I1125 10:45:31.413170 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/10d8da62-ad09-41ab-a9d2-7aaf7b99c939-oauth-serving-cert\") pod \"console-5ffb5f4cbf-vhnwr\" (UID: \"10d8da62-ad09-41ab-a9d2-7aaf7b99c939\") " pod="openshift-console/console-5ffb5f4cbf-vhnwr" Nov 25 10:45:31 crc kubenswrapper[4813]: I1125 10:45:31.413311 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/10d8da62-ad09-41ab-a9d2-7aaf7b99c939-service-ca\") pod \"console-5ffb5f4cbf-vhnwr\" (UID: \"10d8da62-ad09-41ab-a9d2-7aaf7b99c939\") " pod="openshift-console/console-5ffb5f4cbf-vhnwr" Nov 25 10:45:31 crc kubenswrapper[4813]: I1125 10:45:31.415753 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/10d8da62-ad09-41ab-a9d2-7aaf7b99c939-console-serving-cert\") pod \"console-5ffb5f4cbf-vhnwr\" (UID: \"10d8da62-ad09-41ab-a9d2-7aaf7b99c939\") " pod="openshift-console/console-5ffb5f4cbf-vhnwr" Nov 25 10:45:31 crc kubenswrapper[4813]: I1125 10:45:31.420638 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/10d8da62-ad09-41ab-a9d2-7aaf7b99c939-console-oauth-config\") pod \"console-5ffb5f4cbf-vhnwr\" (UID: \"10d8da62-ad09-41ab-a9d2-7aaf7b99c939\") " pod="openshift-console/console-5ffb5f4cbf-vhnwr" Nov 25 10:45:31 crc kubenswrapper[4813]: I1125 10:45:31.429436 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qvdz\" (UniqueName: \"kubernetes.io/projected/10d8da62-ad09-41ab-a9d2-7aaf7b99c939-kube-api-access-6qvdz\") pod \"console-5ffb5f4cbf-vhnwr\" (UID: \"10d8da62-ad09-41ab-a9d2-7aaf7b99c939\") " pod="openshift-console/console-5ffb5f4cbf-vhnwr" Nov 25 10:45:31 crc kubenswrapper[4813]: I1125 10:45:31.434433 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-8b4xz" Nov 25 10:45:31 crc kubenswrapper[4813]: W1125 10:45:31.451630 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod63cd6170_52aa_4bfb_8376_0b7a8da3f64e.slice/crio-f8e062fda31193aa74a617763c62c44c13ec779bc058c5d4aaa15665f68f7ed0 WatchSource:0}: Error finding container f8e062fda31193aa74a617763c62c44c13ec779bc058c5d4aaa15665f68f7ed0: Status 404 returned error can't find the container with id f8e062fda31193aa74a617763c62c44c13ec779bc058c5d4aaa15665f68f7ed0 Nov 25 10:45:31 crc kubenswrapper[4813]: I1125 10:45:31.485989 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5ffb5f4cbf-vhnwr" Nov 25 10:45:31 crc kubenswrapper[4813]: I1125 10:45:31.606004 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-92nqv"] Nov 25 10:45:31 crc kubenswrapper[4813]: I1125 10:45:31.677986 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5ffb5f4cbf-vhnwr"] Nov 25 10:45:31 crc kubenswrapper[4813]: W1125 10:45:31.682626 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod10d8da62_ad09_41ab_a9d2_7aaf7b99c939.slice/crio-627cd41c3e66201a2466890a0fd3422d62051bdd560935984b3303a95a334053 WatchSource:0}: Error finding container 627cd41c3e66201a2466890a0fd3422d62051bdd560935984b3303a95a334053: Status 404 returned error can't find the container with id 627cd41c3e66201a2466890a0fd3422d62051bdd560935984b3303a95a334053 Nov 25 10:45:31 crc kubenswrapper[4813]: I1125 10:45:31.687655 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-92nqv" event={"ID":"98566240-794d-4769-8ca4-7e92f2e158cf","Type":"ContainerStarted","Data":"fd1da3505e8f29c77fb3582ced523cd6d297550455bf40d648a7d0fa77a46f92"} Nov 25 10:45:31 crc kubenswrapper[4813]: I1125 10:45:31.688608 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-8b4xz" event={"ID":"63cd6170-52aa-4bfb-8376-0b7a8da3f64e","Type":"ContainerStarted","Data":"f8e062fda31193aa74a617763c62c44c13ec779bc058c5d4aaa15665f68f7ed0"} Nov 25 10:45:31 crc kubenswrapper[4813]: I1125 10:45:31.689449 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-v5l6z" event={"ID":"4c2eda27-6d33-43b0-847a-7da2f657251e","Type":"ContainerStarted","Data":"9e4a27dc7df4f0f0f7aa152be1cdcc01eeb0e7bab78f499cc88a0b818df523ed"} Nov 25 10:45:31 crc kubenswrapper[4813]: I1125 10:45:31.718107 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/d4f77244-1065-4f49-9ab3-23f0fb4e24c9-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-k5lrd\" (UID: \"d4f77244-1065-4f49-9ab3-23f0fb4e24c9\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-k5lrd" Nov 25 10:45:31 crc kubenswrapper[4813]: I1125 10:45:31.722764 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/d4f77244-1065-4f49-9ab3-23f0fb4e24c9-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-k5lrd\" (UID: \"d4f77244-1065-4f49-9ab3-23f0fb4e24c9\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-k5lrd" Nov 25 10:45:31 crc kubenswrapper[4813]: I1125 10:45:31.845398 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-k5lrd" Nov 25 10:45:32 crc kubenswrapper[4813]: I1125 10:45:32.242593 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-k5lrd"] Nov 25 10:45:32 crc kubenswrapper[4813]: I1125 10:45:32.697560 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-k5lrd" event={"ID":"d4f77244-1065-4f49-9ab3-23f0fb4e24c9","Type":"ContainerStarted","Data":"db275c714c6c74236f72b7dbe180f042e1503ad4cbb32c1d38657da46502bde7"} Nov 25 10:45:32 crc kubenswrapper[4813]: I1125 10:45:32.699847 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5ffb5f4cbf-vhnwr" event={"ID":"10d8da62-ad09-41ab-a9d2-7aaf7b99c939","Type":"ContainerStarted","Data":"142488fa0cb608a5872eae31db9846f17601e6e206f942499074ac331f746960"} Nov 25 10:45:32 crc kubenswrapper[4813]: I1125 10:45:32.699872 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5ffb5f4cbf-vhnwr" event={"ID":"10d8da62-ad09-41ab-a9d2-7aaf7b99c939","Type":"ContainerStarted","Data":"627cd41c3e66201a2466890a0fd3422d62051bdd560935984b3303a95a334053"} Nov 25 10:45:32 crc kubenswrapper[4813]: I1125 10:45:32.720048 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5ffb5f4cbf-vhnwr" podStartSLOduration=1.720026702 podStartE2EDuration="1.720026702s" podCreationTimestamp="2025-11-25 10:45:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 10:45:32.71786969 +0000 UTC m=+829.847579586" watchObservedRunningTime="2025-11-25 10:45:32.720026702 +0000 UTC m=+829.849736588" Nov 25 10:45:36 crc kubenswrapper[4813]: I1125 10:45:36.733197 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-92nqv" event={"ID":"98566240-794d-4769-8ca4-7e92f2e158cf","Type":"ContainerStarted","Data":"d593882db35408b5e4a692869cc7d8c65a9e4ccac0130ea1d466051130a6d2a3"} Nov 25 10:45:36 crc kubenswrapper[4813]: I1125 10:45:36.734054 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-92nqv" Nov 25 10:45:36 crc kubenswrapper[4813]: I1125 10:45:36.738724 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-8b4xz" event={"ID":"63cd6170-52aa-4bfb-8376-0b7a8da3f64e","Type":"ContainerStarted","Data":"4458b91209672fe893f967843d991f1465a95990ce1fca5dede3731dd17e6d26"} Nov 25 10:45:36 crc kubenswrapper[4813]: I1125 10:45:36.738854 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-8b4xz" Nov 25 10:45:36 crc kubenswrapper[4813]: I1125 10:45:36.740760 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-v5l6z" event={"ID":"4c2eda27-6d33-43b0-847a-7da2f657251e","Type":"ContainerStarted","Data":"9723edaa012f829f47f75c931e3533997c33bc1b26e452b925093c44712f779f"} Nov 25 10:45:36 crc kubenswrapper[4813]: I1125 10:45:36.764050 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-92nqv" podStartSLOduration=2.075191912 podStartE2EDuration="6.764028945s" podCreationTimestamp="2025-11-25 10:45:30 +0000 UTC" firstStartedPulling="2025-11-25 10:45:31.629923349 +0000 UTC m=+828.759633235" 
lastFinishedPulling="2025-11-25 10:45:36.318760382 +0000 UTC m=+833.448470268" observedRunningTime="2025-11-25 10:45:36.757363125 +0000 UTC m=+833.887073031" watchObservedRunningTime="2025-11-25 10:45:36.764028945 +0000 UTC m=+833.893738831" Nov 25 10:45:36 crc kubenswrapper[4813]: I1125 10:45:36.778439 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-8b4xz" podStartSLOduration=1.9780243610000001 podStartE2EDuration="6.778423925s" podCreationTimestamp="2025-11-25 10:45:30 +0000 UTC" firstStartedPulling="2025-11-25 10:45:31.45844418 +0000 UTC m=+828.588154066" lastFinishedPulling="2025-11-25 10:45:36.258843744 +0000 UTC m=+833.388553630" observedRunningTime="2025-11-25 10:45:36.776932392 +0000 UTC m=+833.906642288" watchObservedRunningTime="2025-11-25 10:45:36.778423925 +0000 UTC m=+833.908133811" Nov 25 10:45:37 crc kubenswrapper[4813]: I1125 10:45:37.754070 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-k5lrd" event={"ID":"d4f77244-1065-4f49-9ab3-23f0fb4e24c9","Type":"ContainerStarted","Data":"0f3663578f922030a8ef49d3026484dc824d0165fcc1e8b1409ce7593b3bf8e2"} Nov 25 10:45:37 crc kubenswrapper[4813]: I1125 10:45:37.772807 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-k5lrd" podStartSLOduration=2.886187197 podStartE2EDuration="7.772778269s" podCreationTimestamp="2025-11-25 10:45:30 +0000 UTC" firstStartedPulling="2025-11-25 10:45:32.253968756 +0000 UTC m=+829.383678642" lastFinishedPulling="2025-11-25 10:45:37.140559828 +0000 UTC m=+834.270269714" observedRunningTime="2025-11-25 10:45:37.769585988 +0000 UTC m=+834.899295894" watchObservedRunningTime="2025-11-25 10:45:37.772778269 +0000 UTC m=+834.902488175" Nov 25 10:45:38 crc kubenswrapper[4813]: I1125 10:45:38.767583 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-v5l6z" event={"ID":"4c2eda27-6d33-43b0-847a-7da2f657251e","Type":"ContainerStarted","Data":"cf9a7dd4644863c828ce75b5cdb7cd6def24a4f2ac523d766d446d6448149d8b"} Nov 25 10:45:38 crc kubenswrapper[4813]: I1125 10:45:38.785169 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-v5l6z" podStartSLOduration=1.550108513 podStartE2EDuration="8.785127356s" podCreationTimestamp="2025-11-25 10:45:30 +0000 UTC" firstStartedPulling="2025-11-25 10:45:31.333872789 +0000 UTC m=+828.463582675" lastFinishedPulling="2025-11-25 10:45:38.568891632 +0000 UTC m=+835.698601518" observedRunningTime="2025-11-25 10:45:38.782335316 +0000 UTC m=+835.912045212" watchObservedRunningTime="2025-11-25 10:45:38.785127356 +0000 UTC m=+835.914837252" Nov 25 10:45:41 crc kubenswrapper[4813]: I1125 10:45:41.459221 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-8b4xz" Nov 25 10:45:41 crc kubenswrapper[4813]: I1125 10:45:41.487086 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5ffb5f4cbf-vhnwr" Nov 25 10:45:41 crc kubenswrapper[4813]: I1125 10:45:41.487172 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5ffb5f4cbf-vhnwr" Nov 25 10:45:41 crc kubenswrapper[4813]: I1125 10:45:41.493191 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5ffb5f4cbf-vhnwr" Nov 25 
10:45:41 crc kubenswrapper[4813]: I1125 10:45:41.791840 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5ffb5f4cbf-vhnwr" Nov 25 10:45:41 crc kubenswrapper[4813]: I1125 10:45:41.852708 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-rpfp2"] Nov 25 10:45:51 crc kubenswrapper[4813]: I1125 10:45:51.123401 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-92nqv" Nov 25 10:45:51 crc kubenswrapper[4813]: I1125 10:45:51.967368 4813 patch_prober.go:28] interesting pod/machine-config-daemon-knhz8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 10:45:51 crc kubenswrapper[4813]: I1125 10:45:51.967443 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" podUID="8ece7e9c-d49a-4348-98ec-bd6ab589f750" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 10:46:05 crc kubenswrapper[4813]: I1125 10:46:05.417479 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6fbss4"] Nov 25 10:46:05 crc kubenswrapper[4813]: I1125 10:46:05.419573 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6fbss4" Nov 25 10:46:05 crc kubenswrapper[4813]: I1125 10:46:05.422403 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 25 10:46:05 crc kubenswrapper[4813]: I1125 10:46:05.426911 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6fbss4"] Nov 25 10:46:05 crc kubenswrapper[4813]: I1125 10:46:05.516266 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c984cd3e-d5a2-42ac-8b6d-549a77d8ae54-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6fbss4\" (UID: \"c984cd3e-d5a2-42ac-8b6d-549a77d8ae54\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6fbss4" Nov 25 10:46:05 crc kubenswrapper[4813]: I1125 10:46:05.516353 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hknhg\" (UniqueName: \"kubernetes.io/projected/c984cd3e-d5a2-42ac-8b6d-549a77d8ae54-kube-api-access-hknhg\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6fbss4\" (UID: \"c984cd3e-d5a2-42ac-8b6d-549a77d8ae54\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6fbss4" Nov 25 10:46:05 crc kubenswrapper[4813]: I1125 10:46:05.516416 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c984cd3e-d5a2-42ac-8b6d-549a77d8ae54-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6fbss4\" (UID: \"c984cd3e-d5a2-42ac-8b6d-549a77d8ae54\") " 
pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6fbss4" Nov 25 10:46:05 crc kubenswrapper[4813]: I1125 10:46:05.617363 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c984cd3e-d5a2-42ac-8b6d-549a77d8ae54-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6fbss4\" (UID: \"c984cd3e-d5a2-42ac-8b6d-549a77d8ae54\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6fbss4" Nov 25 10:46:05 crc kubenswrapper[4813]: I1125 10:46:05.617453 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hknhg\" (UniqueName: \"kubernetes.io/projected/c984cd3e-d5a2-42ac-8b6d-549a77d8ae54-kube-api-access-hknhg\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6fbss4\" (UID: \"c984cd3e-d5a2-42ac-8b6d-549a77d8ae54\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6fbss4" Nov 25 10:46:05 crc kubenswrapper[4813]: I1125 10:46:05.617510 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c984cd3e-d5a2-42ac-8b6d-549a77d8ae54-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6fbss4\" (UID: \"c984cd3e-d5a2-42ac-8b6d-549a77d8ae54\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6fbss4" Nov 25 10:46:05 crc kubenswrapper[4813]: I1125 10:46:05.618211 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c984cd3e-d5a2-42ac-8b6d-549a77d8ae54-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6fbss4\" (UID: \"c984cd3e-d5a2-42ac-8b6d-549a77d8ae54\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6fbss4" Nov 25 10:46:05 crc kubenswrapper[4813]: I1125 10:46:05.618311 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c984cd3e-d5a2-42ac-8b6d-549a77d8ae54-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6fbss4\" (UID: \"c984cd3e-d5a2-42ac-8b6d-549a77d8ae54\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6fbss4" Nov 25 10:46:05 crc kubenswrapper[4813]: I1125 10:46:05.646942 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hknhg\" (UniqueName: \"kubernetes.io/projected/c984cd3e-d5a2-42ac-8b6d-549a77d8ae54-kube-api-access-hknhg\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6fbss4\" (UID: \"c984cd3e-d5a2-42ac-8b6d-549a77d8ae54\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6fbss4" Nov 25 10:46:05 crc kubenswrapper[4813]: I1125 10:46:05.737069 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6fbss4" Nov 25 10:46:06 crc kubenswrapper[4813]: I1125 10:46:06.151495 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6fbss4"] Nov 25 10:46:06 crc kubenswrapper[4813]: I1125 10:46:06.893006 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-rpfp2" podUID="17571cbf-de36-4b34-af0b-3db7493adaf4" containerName="console" containerID="cri-o://003bf407160049d89860e28356ad43e4e32d4913802425bddffc6e315e5a288a" gracePeriod=15 Nov 25 10:46:06 crc kubenswrapper[4813]: I1125 10:46:06.948844 4813 generic.go:334] "Generic (PLEG): container finished" podID="c984cd3e-d5a2-42ac-8b6d-549a77d8ae54" containerID="058381144b81d2f18d8d091b4bf2e507cf6dd10515c92f6140cad328ab2914a7" exitCode=0 Nov 25 10:46:06 crc kubenswrapper[4813]: I1125 10:46:06.948930 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6fbss4" event={"ID":"c984cd3e-d5a2-42ac-8b6d-549a77d8ae54","Type":"ContainerDied","Data":"058381144b81d2f18d8d091b4bf2e507cf6dd10515c92f6140cad328ab2914a7"} Nov 25 10:46:06 crc kubenswrapper[4813]: I1125 10:46:06.948968 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6fbss4" event={"ID":"c984cd3e-d5a2-42ac-8b6d-549a77d8ae54","Type":"ContainerStarted","Data":"322bac84193b386161ffc64fcfc955987c5accff2c07ec633aa3c2a16f2b3906"} Nov 25 10:46:07 crc kubenswrapper[4813]: I1125 10:46:07.254383 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-rpfp2_17571cbf-de36-4b34-af0b-3db7493adaf4/console/0.log" Nov 25 10:46:07 crc kubenswrapper[4813]: I1125 10:46:07.254881 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-rpfp2" Nov 25 10:46:07 crc kubenswrapper[4813]: I1125 10:46:07.350999 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/17571cbf-de36-4b34-af0b-3db7493adaf4-service-ca\") pod \"17571cbf-de36-4b34-af0b-3db7493adaf4\" (UID: \"17571cbf-de36-4b34-af0b-3db7493adaf4\") " Nov 25 10:46:07 crc kubenswrapper[4813]: I1125 10:46:07.351076 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/17571cbf-de36-4b34-af0b-3db7493adaf4-console-oauth-config\") pod \"17571cbf-de36-4b34-af0b-3db7493adaf4\" (UID: \"17571cbf-de36-4b34-af0b-3db7493adaf4\") " Nov 25 10:46:07 crc kubenswrapper[4813]: I1125 10:46:07.351135 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p68fn\" (UniqueName: \"kubernetes.io/projected/17571cbf-de36-4b34-af0b-3db7493adaf4-kube-api-access-p68fn\") pod \"17571cbf-de36-4b34-af0b-3db7493adaf4\" (UID: \"17571cbf-de36-4b34-af0b-3db7493adaf4\") " Nov 25 10:46:07 crc kubenswrapper[4813]: I1125 10:46:07.351181 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17571cbf-de36-4b34-af0b-3db7493adaf4-trusted-ca-bundle\") pod \"17571cbf-de36-4b34-af0b-3db7493adaf4\" (UID: \"17571cbf-de36-4b34-af0b-3db7493adaf4\") " Nov 25 10:46:07 crc kubenswrapper[4813]: I1125 10:46:07.351205 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/17571cbf-de36-4b34-af0b-3db7493adaf4-console-serving-cert\") pod \"17571cbf-de36-4b34-af0b-3db7493adaf4\" (UID: \"17571cbf-de36-4b34-af0b-3db7493adaf4\") " Nov 25 10:46:07 crc kubenswrapper[4813]: I1125 10:46:07.351244 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/17571cbf-de36-4b34-af0b-3db7493adaf4-console-config\") pod \"17571cbf-de36-4b34-af0b-3db7493adaf4\" (UID: \"17571cbf-de36-4b34-af0b-3db7493adaf4\") " Nov 25 10:46:07 crc kubenswrapper[4813]: I1125 10:46:07.351284 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/17571cbf-de36-4b34-af0b-3db7493adaf4-oauth-serving-cert\") pod \"17571cbf-de36-4b34-af0b-3db7493adaf4\" (UID: \"17571cbf-de36-4b34-af0b-3db7493adaf4\") " Nov 25 10:46:07 crc kubenswrapper[4813]: I1125 10:46:07.352851 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17571cbf-de36-4b34-af0b-3db7493adaf4-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "17571cbf-de36-4b34-af0b-3db7493adaf4" (UID: "17571cbf-de36-4b34-af0b-3db7493adaf4"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:46:07 crc kubenswrapper[4813]: I1125 10:46:07.353014 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17571cbf-de36-4b34-af0b-3db7493adaf4-service-ca" (OuterVolumeSpecName: "service-ca") pod "17571cbf-de36-4b34-af0b-3db7493adaf4" (UID: "17571cbf-de36-4b34-af0b-3db7493adaf4"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:46:07 crc kubenswrapper[4813]: I1125 10:46:07.353284 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17571cbf-de36-4b34-af0b-3db7493adaf4-console-config" (OuterVolumeSpecName: "console-config") pod "17571cbf-de36-4b34-af0b-3db7493adaf4" (UID: "17571cbf-de36-4b34-af0b-3db7493adaf4"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:46:07 crc kubenswrapper[4813]: I1125 10:46:07.353515 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17571cbf-de36-4b34-af0b-3db7493adaf4-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "17571cbf-de36-4b34-af0b-3db7493adaf4" (UID: "17571cbf-de36-4b34-af0b-3db7493adaf4"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:46:07 crc kubenswrapper[4813]: I1125 10:46:07.358259 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17571cbf-de36-4b34-af0b-3db7493adaf4-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "17571cbf-de36-4b34-af0b-3db7493adaf4" (UID: "17571cbf-de36-4b34-af0b-3db7493adaf4"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:46:07 crc kubenswrapper[4813]: I1125 10:46:07.364167 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17571cbf-de36-4b34-af0b-3db7493adaf4-kube-api-access-p68fn" (OuterVolumeSpecName: "kube-api-access-p68fn") pod "17571cbf-de36-4b34-af0b-3db7493adaf4" (UID: "17571cbf-de36-4b34-af0b-3db7493adaf4"). InnerVolumeSpecName "kube-api-access-p68fn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:46:07 crc kubenswrapper[4813]: I1125 10:46:07.364985 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17571cbf-de36-4b34-af0b-3db7493adaf4-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "17571cbf-de36-4b34-af0b-3db7493adaf4" (UID: "17571cbf-de36-4b34-af0b-3db7493adaf4"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:46:07 crc kubenswrapper[4813]: I1125 10:46:07.452486 4813 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/17571cbf-de36-4b34-af0b-3db7493adaf4-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 10:46:07 crc kubenswrapper[4813]: I1125 10:46:07.452550 4813 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/17571cbf-de36-4b34-af0b-3db7493adaf4-console-config\") on node \"crc\" DevicePath \"\"" Nov 25 10:46:07 crc kubenswrapper[4813]: I1125 10:46:07.452562 4813 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/17571cbf-de36-4b34-af0b-3db7493adaf4-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 10:46:07 crc kubenswrapper[4813]: I1125 10:46:07.452619 4813 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/17571cbf-de36-4b34-af0b-3db7493adaf4-service-ca\") on node \"crc\" DevicePath \"\"" Nov 25 10:46:07 crc kubenswrapper[4813]: I1125 10:46:07.452629 4813 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/17571cbf-de36-4b34-af0b-3db7493adaf4-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 25 10:46:07 crc kubenswrapper[4813]: I1125 10:46:07.452640 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p68fn\" (UniqueName: \"kubernetes.io/projected/17571cbf-de36-4b34-af0b-3db7493adaf4-kube-api-access-p68fn\") on node \"crc\" DevicePath \"\"" Nov 25 10:46:07 crc kubenswrapper[4813]: I1125 10:46:07.452651 4813 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17571cbf-de36-4b34-af0b-3db7493adaf4-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 10:46:07 crc kubenswrapper[4813]: I1125 10:46:07.958259 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-rpfp2_17571cbf-de36-4b34-af0b-3db7493adaf4/console/0.log" Nov 25 10:46:07 crc kubenswrapper[4813]: I1125 10:46:07.959140 4813 generic.go:334] "Generic (PLEG): container finished" podID="17571cbf-de36-4b34-af0b-3db7493adaf4" containerID="003bf407160049d89860e28356ad43e4e32d4913802425bddffc6e315e5a288a" exitCode=2 Nov 25 10:46:07 crc kubenswrapper[4813]: I1125 10:46:07.959266 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-rpfp2" event={"ID":"17571cbf-de36-4b34-af0b-3db7493adaf4","Type":"ContainerDied","Data":"003bf407160049d89860e28356ad43e4e32d4913802425bddffc6e315e5a288a"} Nov 25 10:46:07 crc kubenswrapper[4813]: I1125 10:46:07.959384 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-rpfp2" event={"ID":"17571cbf-de36-4b34-af0b-3db7493adaf4","Type":"ContainerDied","Data":"4d47be0aed0c302547e18e8ddfd27cf1643aabc2810b91a7b34a27127b04dddb"} Nov 25 10:46:07 crc kubenswrapper[4813]: I1125 10:46:07.959488 4813 scope.go:117] "RemoveContainer" containerID="003bf407160049d89860e28356ad43e4e32d4913802425bddffc6e315e5a288a" Nov 25 10:46:07 crc kubenswrapper[4813]: I1125 10:46:07.959707 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-rpfp2" Nov 25 10:46:07 crc kubenswrapper[4813]: I1125 10:46:07.982738 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-rpfp2"] Nov 25 10:46:07 crc kubenswrapper[4813]: I1125 10:46:07.985940 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-rpfp2"] Nov 25 10:46:07 crc kubenswrapper[4813]: I1125 10:46:07.997730 4813 scope.go:117] "RemoveContainer" containerID="003bf407160049d89860e28356ad43e4e32d4913802425bddffc6e315e5a288a" Nov 25 10:46:07 crc kubenswrapper[4813]: E1125 10:46:07.998181 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"003bf407160049d89860e28356ad43e4e32d4913802425bddffc6e315e5a288a\": container with ID starting with 003bf407160049d89860e28356ad43e4e32d4913802425bddffc6e315e5a288a not found: ID does not exist" containerID="003bf407160049d89860e28356ad43e4e32d4913802425bddffc6e315e5a288a" Nov 25 10:46:07 crc kubenswrapper[4813]: I1125 10:46:07.998216 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"003bf407160049d89860e28356ad43e4e32d4913802425bddffc6e315e5a288a"} err="failed to get container status \"003bf407160049d89860e28356ad43e4e32d4913802425bddffc6e315e5a288a\": rpc error: code = NotFound desc = could not find container \"003bf407160049d89860e28356ad43e4e32d4913802425bddffc6e315e5a288a\": container with ID starting with 003bf407160049d89860e28356ad43e4e32d4913802425bddffc6e315e5a288a not found: ID does not exist" Nov 25 10:46:09 crc kubenswrapper[4813]: I1125 10:46:09.630625 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17571cbf-de36-4b34-af0b-3db7493adaf4" path="/var/lib/kubelet/pods/17571cbf-de36-4b34-af0b-3db7493adaf4/volumes" Nov 25 10:46:10 crc kubenswrapper[4813]: I1125 10:46:10.981958 4813 generic.go:334] "Generic (PLEG): container finished" podID="c984cd3e-d5a2-42ac-8b6d-549a77d8ae54" containerID="5f1bd07afa6fd700c671f44897a4571557fd1748f84851a4f2c89bbe2129c77e" exitCode=0 Nov 25 10:46:10 crc kubenswrapper[4813]: I1125 10:46:10.982042 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6fbss4" event={"ID":"c984cd3e-d5a2-42ac-8b6d-549a77d8ae54","Type":"ContainerDied","Data":"5f1bd07afa6fd700c671f44897a4571557fd1748f84851a4f2c89bbe2129c77e"} Nov 25 10:46:11 crc kubenswrapper[4813]: I1125 10:46:11.991946 4813 generic.go:334] "Generic (PLEG): container finished" podID="c984cd3e-d5a2-42ac-8b6d-549a77d8ae54" containerID="93337c18872af7bc30383875c6803b4406d3657a9d9cc5ebca09686a213634b9" exitCode=0 Nov 25 10:46:11 crc kubenswrapper[4813]: I1125 10:46:11.992104 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6fbss4" event={"ID":"c984cd3e-d5a2-42ac-8b6d-549a77d8ae54","Type":"ContainerDied","Data":"93337c18872af7bc30383875c6803b4406d3657a9d9cc5ebca09686a213634b9"} Nov 25 10:46:13 crc kubenswrapper[4813]: I1125 10:46:13.237339 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6fbss4" Nov 25 10:46:13 crc kubenswrapper[4813]: I1125 10:46:13.348066 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c984cd3e-d5a2-42ac-8b6d-549a77d8ae54-util\") pod \"c984cd3e-d5a2-42ac-8b6d-549a77d8ae54\" (UID: \"c984cd3e-d5a2-42ac-8b6d-549a77d8ae54\") " Nov 25 10:46:13 crc kubenswrapper[4813]: I1125 10:46:13.348217 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hknhg\" (UniqueName: \"kubernetes.io/projected/c984cd3e-d5a2-42ac-8b6d-549a77d8ae54-kube-api-access-hknhg\") pod \"c984cd3e-d5a2-42ac-8b6d-549a77d8ae54\" (UID: \"c984cd3e-d5a2-42ac-8b6d-549a77d8ae54\") " Nov 25 10:46:13 crc kubenswrapper[4813]: I1125 10:46:13.348305 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c984cd3e-d5a2-42ac-8b6d-549a77d8ae54-bundle\") pod \"c984cd3e-d5a2-42ac-8b6d-549a77d8ae54\" (UID: \"c984cd3e-d5a2-42ac-8b6d-549a77d8ae54\") " Nov 25 10:46:13 crc kubenswrapper[4813]: I1125 10:46:13.349360 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c984cd3e-d5a2-42ac-8b6d-549a77d8ae54-bundle" (OuterVolumeSpecName: "bundle") pod "c984cd3e-d5a2-42ac-8b6d-549a77d8ae54" (UID: "c984cd3e-d5a2-42ac-8b6d-549a77d8ae54"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 10:46:13 crc kubenswrapper[4813]: I1125 10:46:13.355606 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c984cd3e-d5a2-42ac-8b6d-549a77d8ae54-kube-api-access-hknhg" (OuterVolumeSpecName: "kube-api-access-hknhg") pod "c984cd3e-d5a2-42ac-8b6d-549a77d8ae54" (UID: "c984cd3e-d5a2-42ac-8b6d-549a77d8ae54"). InnerVolumeSpecName "kube-api-access-hknhg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:46:13 crc kubenswrapper[4813]: I1125 10:46:13.359272 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c984cd3e-d5a2-42ac-8b6d-549a77d8ae54-util" (OuterVolumeSpecName: "util") pod "c984cd3e-d5a2-42ac-8b6d-549a77d8ae54" (UID: "c984cd3e-d5a2-42ac-8b6d-549a77d8ae54"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 10:46:13 crc kubenswrapper[4813]: I1125 10:46:13.449719 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hknhg\" (UniqueName: \"kubernetes.io/projected/c984cd3e-d5a2-42ac-8b6d-549a77d8ae54-kube-api-access-hknhg\") on node \"crc\" DevicePath \"\"" Nov 25 10:46:13 crc kubenswrapper[4813]: I1125 10:46:13.449766 4813 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c984cd3e-d5a2-42ac-8b6d-549a77d8ae54-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 10:46:13 crc kubenswrapper[4813]: I1125 10:46:13.449775 4813 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c984cd3e-d5a2-42ac-8b6d-549a77d8ae54-util\") on node \"crc\" DevicePath \"\"" Nov 25 10:46:14 crc kubenswrapper[4813]: I1125 10:46:14.007154 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6fbss4" event={"ID":"c984cd3e-d5a2-42ac-8b6d-549a77d8ae54","Type":"ContainerDied","Data":"322bac84193b386161ffc64fcfc955987c5accff2c07ec633aa3c2a16f2b3906"} Nov 25 10:46:14 crc kubenswrapper[4813]: I1125 10:46:14.007555 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="322bac84193b386161ffc64fcfc955987c5accff2c07ec633aa3c2a16f2b3906" Nov 25 10:46:14 crc kubenswrapper[4813]: I1125 10:46:14.007224 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6fbss4" Nov 25 10:46:21 crc kubenswrapper[4813]: I1125 10:46:21.967277 4813 patch_prober.go:28] interesting pod/machine-config-daemon-knhz8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 10:46:21 crc kubenswrapper[4813]: I1125 10:46:21.967931 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" podUID="8ece7e9c-d49a-4348-98ec-bd6ab589f750" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 10:46:21 crc kubenswrapper[4813]: I1125 10:46:21.967986 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" Nov 25 10:46:21 crc kubenswrapper[4813]: I1125 10:46:21.968693 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"94199ba3a0acbc10bf1b1d8a9e55614a98ff3a435215d0c63b967639b76f1985"} pod="openshift-machine-config-operator/machine-config-daemon-knhz8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 10:46:21 crc kubenswrapper[4813]: I1125 10:46:21.968763 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" podUID="8ece7e9c-d49a-4348-98ec-bd6ab589f750" containerName="machine-config-daemon" containerID="cri-o://94199ba3a0acbc10bf1b1d8a9e55614a98ff3a435215d0c63b967639b76f1985" gracePeriod=600 Nov 25 10:46:23 crc kubenswrapper[4813]: I1125 10:46:23.053821 4813 generic.go:334] "Generic (PLEG): container finished" 
podID="8ece7e9c-d49a-4348-98ec-bd6ab589f750" containerID="94199ba3a0acbc10bf1b1d8a9e55614a98ff3a435215d0c63b967639b76f1985" exitCode=0 Nov 25 10:46:23 crc kubenswrapper[4813]: I1125 10:46:23.053898 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" event={"ID":"8ece7e9c-d49a-4348-98ec-bd6ab589f750","Type":"ContainerDied","Data":"94199ba3a0acbc10bf1b1d8a9e55614a98ff3a435215d0c63b967639b76f1985"} Nov 25 10:46:23 crc kubenswrapper[4813]: I1125 10:46:23.054393 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" event={"ID":"8ece7e9c-d49a-4348-98ec-bd6ab589f750","Type":"ContainerStarted","Data":"efbe54cb2ef6c89c7fb03c162ec904d1deff9a1b48f07c1332fb33b84a4f4c6c"} Nov 25 10:46:23 crc kubenswrapper[4813]: I1125 10:46:23.054417 4813 scope.go:117] "RemoveContainer" containerID="cb4d567d43fddcd717213e7940966b7b25b43b79bdbef12af12d619770788967" Nov 25 10:46:23 crc kubenswrapper[4813]: I1125 10:46:23.499266 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-6b84b955f5-mmrm7"] Nov 25 10:46:23 crc kubenswrapper[4813]: E1125 10:46:23.499561 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17571cbf-de36-4b34-af0b-3db7493adaf4" containerName="console" Nov 25 10:46:23 crc kubenswrapper[4813]: I1125 10:46:23.499578 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="17571cbf-de36-4b34-af0b-3db7493adaf4" containerName="console" Nov 25 10:46:23 crc kubenswrapper[4813]: E1125 10:46:23.499592 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c984cd3e-d5a2-42ac-8b6d-549a77d8ae54" containerName="util" Nov 25 10:46:23 crc kubenswrapper[4813]: I1125 10:46:23.499601 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="c984cd3e-d5a2-42ac-8b6d-549a77d8ae54" containerName="util" Nov 25 10:46:23 crc kubenswrapper[4813]: E1125 10:46:23.499625 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c984cd3e-d5a2-42ac-8b6d-549a77d8ae54" containerName="pull" Nov 25 10:46:23 crc kubenswrapper[4813]: I1125 10:46:23.499634 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="c984cd3e-d5a2-42ac-8b6d-549a77d8ae54" containerName="pull" Nov 25 10:46:23 crc kubenswrapper[4813]: E1125 10:46:23.499644 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c984cd3e-d5a2-42ac-8b6d-549a77d8ae54" containerName="extract" Nov 25 10:46:23 crc kubenswrapper[4813]: I1125 10:46:23.499652 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="c984cd3e-d5a2-42ac-8b6d-549a77d8ae54" containerName="extract" Nov 25 10:46:23 crc kubenswrapper[4813]: I1125 10:46:23.499788 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="17571cbf-de36-4b34-af0b-3db7493adaf4" containerName="console" Nov 25 10:46:23 crc kubenswrapper[4813]: I1125 10:46:23.499809 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="c984cd3e-d5a2-42ac-8b6d-549a77d8ae54" containerName="extract" Nov 25 10:46:23 crc kubenswrapper[4813]: I1125 10:46:23.500341 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6b84b955f5-mmrm7" Nov 25 10:46:23 crc kubenswrapper[4813]: I1125 10:46:23.502415 4813 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Nov 25 10:46:23 crc kubenswrapper[4813]: I1125 10:46:23.502749 4813 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Nov 25 10:46:23 crc kubenswrapper[4813]: I1125 10:46:23.503038 4813 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-xhjqf" Nov 25 10:46:23 crc kubenswrapper[4813]: I1125 10:46:23.503076 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Nov 25 10:46:23 crc kubenswrapper[4813]: I1125 10:46:23.504944 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Nov 25 10:46:23 crc kubenswrapper[4813]: I1125 10:46:23.509653 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6b84b955f5-mmrm7"] Nov 25 10:46:23 crc kubenswrapper[4813]: I1125 10:46:23.683369 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8v67\" (UniqueName: \"kubernetes.io/projected/a6eb0ffd-2e55-4d5a-9ac7-19b25ba6ec8b-kube-api-access-k8v67\") pod \"metallb-operator-controller-manager-6b84b955f5-mmrm7\" (UID: \"a6eb0ffd-2e55-4d5a-9ac7-19b25ba6ec8b\") " pod="metallb-system/metallb-operator-controller-manager-6b84b955f5-mmrm7" Nov 25 10:46:23 crc kubenswrapper[4813]: I1125 10:46:23.683738 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a6eb0ffd-2e55-4d5a-9ac7-19b25ba6ec8b-webhook-cert\") pod \"metallb-operator-controller-manager-6b84b955f5-mmrm7\" (UID: \"a6eb0ffd-2e55-4d5a-9ac7-19b25ba6ec8b\") " pod="metallb-system/metallb-operator-controller-manager-6b84b955f5-mmrm7" Nov 25 10:46:23 crc kubenswrapper[4813]: I1125 10:46:23.683798 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a6eb0ffd-2e55-4d5a-9ac7-19b25ba6ec8b-apiservice-cert\") pod \"metallb-operator-controller-manager-6b84b955f5-mmrm7\" (UID: \"a6eb0ffd-2e55-4d5a-9ac7-19b25ba6ec8b\") " pod="metallb-system/metallb-operator-controller-manager-6b84b955f5-mmrm7" Nov 25 10:46:23 crc kubenswrapper[4813]: I1125 10:46:23.741486 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-546d569f67-5bbtt"] Nov 25 10:46:23 crc kubenswrapper[4813]: I1125 10:46:23.742341 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-546d569f67-5bbtt" Nov 25 10:46:23 crc kubenswrapper[4813]: I1125 10:46:23.747086 4813 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-g4q7s" Nov 25 10:46:23 crc kubenswrapper[4813]: I1125 10:46:23.747168 4813 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Nov 25 10:46:23 crc kubenswrapper[4813]: I1125 10:46:23.747173 4813 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Nov 25 10:46:23 crc kubenswrapper[4813]: I1125 10:46:23.756207 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-546d569f67-5bbtt"] Nov 25 10:46:23 crc kubenswrapper[4813]: I1125 10:46:23.785040 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8v67\" (UniqueName: \"kubernetes.io/projected/a6eb0ffd-2e55-4d5a-9ac7-19b25ba6ec8b-kube-api-access-k8v67\") pod \"metallb-operator-controller-manager-6b84b955f5-mmrm7\" (UID: \"a6eb0ffd-2e55-4d5a-9ac7-19b25ba6ec8b\") " pod="metallb-system/metallb-operator-controller-manager-6b84b955f5-mmrm7" Nov 25 10:46:23 crc kubenswrapper[4813]: I1125 10:46:23.785101 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a6eb0ffd-2e55-4d5a-9ac7-19b25ba6ec8b-webhook-cert\") pod \"metallb-operator-controller-manager-6b84b955f5-mmrm7\" (UID: \"a6eb0ffd-2e55-4d5a-9ac7-19b25ba6ec8b\") " pod="metallb-system/metallb-operator-controller-manager-6b84b955f5-mmrm7" Nov 25 10:46:23 crc kubenswrapper[4813]: I1125 10:46:23.785178 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a6eb0ffd-2e55-4d5a-9ac7-19b25ba6ec8b-apiservice-cert\") pod \"metallb-operator-controller-manager-6b84b955f5-mmrm7\" (UID: \"a6eb0ffd-2e55-4d5a-9ac7-19b25ba6ec8b\") " pod="metallb-system/metallb-operator-controller-manager-6b84b955f5-mmrm7" Nov 25 10:46:23 crc kubenswrapper[4813]: I1125 10:46:23.794612 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a6eb0ffd-2e55-4d5a-9ac7-19b25ba6ec8b-webhook-cert\") pod \"metallb-operator-controller-manager-6b84b955f5-mmrm7\" (UID: \"a6eb0ffd-2e55-4d5a-9ac7-19b25ba6ec8b\") " pod="metallb-system/metallb-operator-controller-manager-6b84b955f5-mmrm7" Nov 25 10:46:23 crc kubenswrapper[4813]: I1125 10:46:23.797413 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a6eb0ffd-2e55-4d5a-9ac7-19b25ba6ec8b-apiservice-cert\") pod \"metallb-operator-controller-manager-6b84b955f5-mmrm7\" (UID: \"a6eb0ffd-2e55-4d5a-9ac7-19b25ba6ec8b\") " pod="metallb-system/metallb-operator-controller-manager-6b84b955f5-mmrm7" Nov 25 10:46:23 crc kubenswrapper[4813]: I1125 10:46:23.803252 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8v67\" (UniqueName: \"kubernetes.io/projected/a6eb0ffd-2e55-4d5a-9ac7-19b25ba6ec8b-kube-api-access-k8v67\") pod \"metallb-operator-controller-manager-6b84b955f5-mmrm7\" (UID: \"a6eb0ffd-2e55-4d5a-9ac7-19b25ba6ec8b\") " pod="metallb-system/metallb-operator-controller-manager-6b84b955f5-mmrm7" Nov 25 10:46:23 crc kubenswrapper[4813]: I1125 10:46:23.814693 4813 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6b84b955f5-mmrm7" Nov 25 10:46:23 crc kubenswrapper[4813]: I1125 10:46:23.886830 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1679876e-16fe-4437-a0d5-05f978057c2d-apiservice-cert\") pod \"metallb-operator-webhook-server-546d569f67-5bbtt\" (UID: \"1679876e-16fe-4437-a0d5-05f978057c2d\") " pod="metallb-system/metallb-operator-webhook-server-546d569f67-5bbtt" Nov 25 10:46:23 crc kubenswrapper[4813]: I1125 10:46:23.886877 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1679876e-16fe-4437-a0d5-05f978057c2d-webhook-cert\") pod \"metallb-operator-webhook-server-546d569f67-5bbtt\" (UID: \"1679876e-16fe-4437-a0d5-05f978057c2d\") " pod="metallb-system/metallb-operator-webhook-server-546d569f67-5bbtt" Nov 25 10:46:23 crc kubenswrapper[4813]: I1125 10:46:23.887143 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhjt8\" (UniqueName: \"kubernetes.io/projected/1679876e-16fe-4437-a0d5-05f978057c2d-kube-api-access-hhjt8\") pod \"metallb-operator-webhook-server-546d569f67-5bbtt\" (UID: \"1679876e-16fe-4437-a0d5-05f978057c2d\") " pod="metallb-system/metallb-operator-webhook-server-546d569f67-5bbtt" Nov 25 10:46:23 crc kubenswrapper[4813]: I1125 10:46:23.988812 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhjt8\" (UniqueName: \"kubernetes.io/projected/1679876e-16fe-4437-a0d5-05f978057c2d-kube-api-access-hhjt8\") pod \"metallb-operator-webhook-server-546d569f67-5bbtt\" (UID: \"1679876e-16fe-4437-a0d5-05f978057c2d\") " pod="metallb-system/metallb-operator-webhook-server-546d569f67-5bbtt" Nov 25 10:46:23 crc kubenswrapper[4813]: I1125 10:46:23.989277 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1679876e-16fe-4437-a0d5-05f978057c2d-apiservice-cert\") pod \"metallb-operator-webhook-server-546d569f67-5bbtt\" (UID: \"1679876e-16fe-4437-a0d5-05f978057c2d\") " pod="metallb-system/metallb-operator-webhook-server-546d569f67-5bbtt" Nov 25 10:46:23 crc kubenswrapper[4813]: I1125 10:46:23.989330 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1679876e-16fe-4437-a0d5-05f978057c2d-webhook-cert\") pod \"metallb-operator-webhook-server-546d569f67-5bbtt\" (UID: \"1679876e-16fe-4437-a0d5-05f978057c2d\") " pod="metallb-system/metallb-operator-webhook-server-546d569f67-5bbtt" Nov 25 10:46:23 crc kubenswrapper[4813]: I1125 10:46:23.997239 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1679876e-16fe-4437-a0d5-05f978057c2d-webhook-cert\") pod \"metallb-operator-webhook-server-546d569f67-5bbtt\" (UID: \"1679876e-16fe-4437-a0d5-05f978057c2d\") " pod="metallb-system/metallb-operator-webhook-server-546d569f67-5bbtt" Nov 25 10:46:24 crc kubenswrapper[4813]: I1125 10:46:24.004902 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1679876e-16fe-4437-a0d5-05f978057c2d-apiservice-cert\") pod \"metallb-operator-webhook-server-546d569f67-5bbtt\" (UID: 
\"1679876e-16fe-4437-a0d5-05f978057c2d\") " pod="metallb-system/metallb-operator-webhook-server-546d569f67-5bbtt" Nov 25 10:46:24 crc kubenswrapper[4813]: I1125 10:46:24.010738 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhjt8\" (UniqueName: \"kubernetes.io/projected/1679876e-16fe-4437-a0d5-05f978057c2d-kube-api-access-hhjt8\") pod \"metallb-operator-webhook-server-546d569f67-5bbtt\" (UID: \"1679876e-16fe-4437-a0d5-05f978057c2d\") " pod="metallb-system/metallb-operator-webhook-server-546d569f67-5bbtt" Nov 25 10:46:24 crc kubenswrapper[4813]: I1125 10:46:24.058144 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-546d569f67-5bbtt" Nov 25 10:46:24 crc kubenswrapper[4813]: I1125 10:46:24.300534 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-546d569f67-5bbtt"] Nov 25 10:46:24 crc kubenswrapper[4813]: I1125 10:46:24.303886 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6b84b955f5-mmrm7"] Nov 25 10:46:24 crc kubenswrapper[4813]: W1125 10:46:24.318281 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6eb0ffd_2e55_4d5a_9ac7_19b25ba6ec8b.slice/crio-4cd9a5b72c749521f4412b80fbef4b42be4eb5ec63e17e8604a8c003dbf02066 WatchSource:0}: Error finding container 4cd9a5b72c749521f4412b80fbef4b42be4eb5ec63e17e8604a8c003dbf02066: Status 404 returned error can't find the container with id 4cd9a5b72c749521f4412b80fbef4b42be4eb5ec63e17e8604a8c003dbf02066 Nov 25 10:46:25 crc kubenswrapper[4813]: I1125 10:46:25.094745 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-546d569f67-5bbtt" event={"ID":"1679876e-16fe-4437-a0d5-05f978057c2d","Type":"ContainerStarted","Data":"bf70183611fcf1f5fdbde27171a6770391bacf38c6ea1312c97a806bf12c0cf9"} Nov 25 10:46:25 crc kubenswrapper[4813]: I1125 10:46:25.097883 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6b84b955f5-mmrm7" event={"ID":"a6eb0ffd-2e55-4d5a-9ac7-19b25ba6ec8b","Type":"ContainerStarted","Data":"4cd9a5b72c749521f4412b80fbef4b42be4eb5ec63e17e8604a8c003dbf02066"} Nov 25 10:46:30 crc kubenswrapper[4813]: I1125 10:46:30.130568 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-546d569f67-5bbtt" event={"ID":"1679876e-16fe-4437-a0d5-05f978057c2d","Type":"ContainerStarted","Data":"30232a485734197087d620812accf7fa0cd0ea00324851419e5bd2a2f79879a8"} Nov 25 10:46:30 crc kubenswrapper[4813]: I1125 10:46:30.131443 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-546d569f67-5bbtt" Nov 25 10:46:30 crc kubenswrapper[4813]: I1125 10:46:30.133522 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6b84b955f5-mmrm7" event={"ID":"a6eb0ffd-2e55-4d5a-9ac7-19b25ba6ec8b","Type":"ContainerStarted","Data":"0bea679701fb92dd51b86000dddec84983c7baac6e6090c8a3567ede6024ce13"} Nov 25 10:46:30 crc kubenswrapper[4813]: I1125 10:46:30.133835 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-6b84b955f5-mmrm7" Nov 25 10:46:30 crc kubenswrapper[4813]: I1125 10:46:30.179365 4813 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-546d569f67-5bbtt" podStartSLOduration=2.326594472 podStartE2EDuration="7.179346657s" podCreationTimestamp="2025-11-25 10:46:23 +0000 UTC" firstStartedPulling="2025-11-25 10:46:24.324456756 +0000 UTC m=+881.454166642" lastFinishedPulling="2025-11-25 10:46:29.177208931 +0000 UTC m=+886.306918827" observedRunningTime="2025-11-25 10:46:30.153934523 +0000 UTC m=+887.283644469" watchObservedRunningTime="2025-11-25 10:46:30.179346657 +0000 UTC m=+887.309056533" Nov 25 10:46:30 crc kubenswrapper[4813]: I1125 10:46:30.180384 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-6b84b955f5-mmrm7" podStartSLOduration=2.345277844 podStartE2EDuration="7.180378607s" podCreationTimestamp="2025-11-25 10:46:23 +0000 UTC" firstStartedPulling="2025-11-25 10:46:24.322025656 +0000 UTC m=+881.451735542" lastFinishedPulling="2025-11-25 10:46:29.157126429 +0000 UTC m=+886.286836305" observedRunningTime="2025-11-25 10:46:30.177863695 +0000 UTC m=+887.307573601" watchObservedRunningTime="2025-11-25 10:46:30.180378607 +0000 UTC m=+887.310088483" Nov 25 10:46:40 crc kubenswrapper[4813]: I1125 10:46:40.427739 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-f75dt"] Nov 25 10:46:40 crc kubenswrapper[4813]: I1125 10:46:40.430384 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f75dt" Nov 25 10:46:40 crc kubenswrapper[4813]: I1125 10:46:40.442201 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-f75dt"] Nov 25 10:46:40 crc kubenswrapper[4813]: I1125 10:46:40.531730 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2h2jx\" (UniqueName: \"kubernetes.io/projected/3482a89d-4a9f-42fc-8a1e-6a8aee3d0038-kube-api-access-2h2jx\") pod \"redhat-marketplace-f75dt\" (UID: \"3482a89d-4a9f-42fc-8a1e-6a8aee3d0038\") " pod="openshift-marketplace/redhat-marketplace-f75dt" Nov 25 10:46:40 crc kubenswrapper[4813]: I1125 10:46:40.531801 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3482a89d-4a9f-42fc-8a1e-6a8aee3d0038-utilities\") pod \"redhat-marketplace-f75dt\" (UID: \"3482a89d-4a9f-42fc-8a1e-6a8aee3d0038\") " pod="openshift-marketplace/redhat-marketplace-f75dt" Nov 25 10:46:40 crc kubenswrapper[4813]: I1125 10:46:40.532539 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3482a89d-4a9f-42fc-8a1e-6a8aee3d0038-catalog-content\") pod \"redhat-marketplace-f75dt\" (UID: \"3482a89d-4a9f-42fc-8a1e-6a8aee3d0038\") " pod="openshift-marketplace/redhat-marketplace-f75dt" Nov 25 10:46:40 crc kubenswrapper[4813]: I1125 10:46:40.635447 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3482a89d-4a9f-42fc-8a1e-6a8aee3d0038-utilities\") pod \"redhat-marketplace-f75dt\" (UID: \"3482a89d-4a9f-42fc-8a1e-6a8aee3d0038\") " pod="openshift-marketplace/redhat-marketplace-f75dt" Nov 25 10:46:40 crc kubenswrapper[4813]: I1125 10:46:40.635522 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/3482a89d-4a9f-42fc-8a1e-6a8aee3d0038-catalog-content\") pod \"redhat-marketplace-f75dt\" (UID: \"3482a89d-4a9f-42fc-8a1e-6a8aee3d0038\") " pod="openshift-marketplace/redhat-marketplace-f75dt" Nov 25 10:46:40 crc kubenswrapper[4813]: I1125 10:46:40.635611 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2h2jx\" (UniqueName: \"kubernetes.io/projected/3482a89d-4a9f-42fc-8a1e-6a8aee3d0038-kube-api-access-2h2jx\") pod \"redhat-marketplace-f75dt\" (UID: \"3482a89d-4a9f-42fc-8a1e-6a8aee3d0038\") " pod="openshift-marketplace/redhat-marketplace-f75dt" Nov 25 10:46:40 crc kubenswrapper[4813]: I1125 10:46:40.636219 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3482a89d-4a9f-42fc-8a1e-6a8aee3d0038-utilities\") pod \"redhat-marketplace-f75dt\" (UID: \"3482a89d-4a9f-42fc-8a1e-6a8aee3d0038\") " pod="openshift-marketplace/redhat-marketplace-f75dt" Nov 25 10:46:40 crc kubenswrapper[4813]: I1125 10:46:40.636392 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3482a89d-4a9f-42fc-8a1e-6a8aee3d0038-catalog-content\") pod \"redhat-marketplace-f75dt\" (UID: \"3482a89d-4a9f-42fc-8a1e-6a8aee3d0038\") " pod="openshift-marketplace/redhat-marketplace-f75dt" Nov 25 10:46:40 crc kubenswrapper[4813]: I1125 10:46:40.663378 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2h2jx\" (UniqueName: \"kubernetes.io/projected/3482a89d-4a9f-42fc-8a1e-6a8aee3d0038-kube-api-access-2h2jx\") pod \"redhat-marketplace-f75dt\" (UID: \"3482a89d-4a9f-42fc-8a1e-6a8aee3d0038\") " pod="openshift-marketplace/redhat-marketplace-f75dt" Nov 25 10:46:40 crc kubenswrapper[4813]: I1125 10:46:40.749847 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f75dt" Nov 25 10:46:40 crc kubenswrapper[4813]: I1125 10:46:40.999004 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-f75dt"] Nov 25 10:46:41 crc kubenswrapper[4813]: I1125 10:46:41.197662 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f75dt" event={"ID":"3482a89d-4a9f-42fc-8a1e-6a8aee3d0038","Type":"ContainerStarted","Data":"5df2c5f8d32840967bcddd320c92fae051f612f96c4c9207bdfea5b9d7940fa2"} Nov 25 10:46:41 crc kubenswrapper[4813]: I1125 10:46:41.198073 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f75dt" event={"ID":"3482a89d-4a9f-42fc-8a1e-6a8aee3d0038","Type":"ContainerStarted","Data":"3bef72e5b0511b7bf21341b0680fad31eda5270e08d1a8e08cf464eed5dce58a"} Nov 25 10:46:42 crc kubenswrapper[4813]: I1125 10:46:42.203901 4813 generic.go:334] "Generic (PLEG): container finished" podID="3482a89d-4a9f-42fc-8a1e-6a8aee3d0038" containerID="5df2c5f8d32840967bcddd320c92fae051f612f96c4c9207bdfea5b9d7940fa2" exitCode=0 Nov 25 10:46:42 crc kubenswrapper[4813]: I1125 10:46:42.203943 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f75dt" event={"ID":"3482a89d-4a9f-42fc-8a1e-6a8aee3d0038","Type":"ContainerDied","Data":"5df2c5f8d32840967bcddd320c92fae051f612f96c4c9207bdfea5b9d7940fa2"} Nov 25 10:46:43 crc kubenswrapper[4813]: I1125 10:46:43.212635 4813 generic.go:334] "Generic (PLEG): container finished" podID="3482a89d-4a9f-42fc-8a1e-6a8aee3d0038" containerID="5554f22366a8e9c52f39038d481f91a6833c353511f3ca6162f0231933ea94ca" exitCode=0 Nov 25 10:46:43 crc kubenswrapper[4813]: I1125 10:46:43.212701 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f75dt" event={"ID":"3482a89d-4a9f-42fc-8a1e-6a8aee3d0038","Type":"ContainerDied","Data":"5554f22366a8e9c52f39038d481f91a6833c353511f3ca6162f0231933ea94ca"} Nov 25 10:46:44 crc kubenswrapper[4813]: I1125 10:46:44.067143 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-546d569f67-5bbtt" Nov 25 10:46:44 crc kubenswrapper[4813]: I1125 10:46:44.220482 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f75dt" event={"ID":"3482a89d-4a9f-42fc-8a1e-6a8aee3d0038","Type":"ContainerStarted","Data":"b3473e1143434e8d5b8de2390c230e7fa308c73126b4ba28eb2362d6f478b484"} Nov 25 10:46:44 crc kubenswrapper[4813]: I1125 10:46:44.239352 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-f75dt" podStartSLOduration=2.796362734 podStartE2EDuration="4.239328564s" podCreationTimestamp="2025-11-25 10:46:40 +0000 UTC" firstStartedPulling="2025-11-25 10:46:42.205752388 +0000 UTC m=+899.335462264" lastFinishedPulling="2025-11-25 10:46:43.648718208 +0000 UTC m=+900.778428094" observedRunningTime="2025-11-25 10:46:44.236541544 +0000 UTC m=+901.366251450" watchObservedRunningTime="2025-11-25 10:46:44.239328564 +0000 UTC m=+901.369038450" Nov 25 10:46:50 crc kubenswrapper[4813]: I1125 10:46:50.750015 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-f75dt" Nov 25 10:46:50 crc kubenswrapper[4813]: I1125 10:46:50.750723 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-f75dt" Nov 25 10:46:50 crc kubenswrapper[4813]: I1125 10:46:50.797000 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-f75dt" Nov 25 10:46:51 crc kubenswrapper[4813]: I1125 10:46:51.318838 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-f75dt" Nov 25 10:46:53 crc kubenswrapper[4813]: I1125 10:46:53.218339 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-f75dt"] Nov 25 10:46:53 crc kubenswrapper[4813]: I1125 10:46:53.281421 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-f75dt" podUID="3482a89d-4a9f-42fc-8a1e-6a8aee3d0038" containerName="registry-server" containerID="cri-o://b3473e1143434e8d5b8de2390c230e7fa308c73126b4ba28eb2362d6f478b484" gracePeriod=2 Nov 25 10:46:54 crc kubenswrapper[4813]: I1125 10:46:54.155827 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f75dt" Nov 25 10:46:54 crc kubenswrapper[4813]: I1125 10:46:54.289845 4813 generic.go:334] "Generic (PLEG): container finished" podID="3482a89d-4a9f-42fc-8a1e-6a8aee3d0038" containerID="b3473e1143434e8d5b8de2390c230e7fa308c73126b4ba28eb2362d6f478b484" exitCode=0 Nov 25 10:46:54 crc kubenswrapper[4813]: I1125 10:46:54.289943 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f75dt" Nov 25 10:46:54 crc kubenswrapper[4813]: I1125 10:46:54.289917 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f75dt" event={"ID":"3482a89d-4a9f-42fc-8a1e-6a8aee3d0038","Type":"ContainerDied","Data":"b3473e1143434e8d5b8de2390c230e7fa308c73126b4ba28eb2362d6f478b484"} Nov 25 10:46:54 crc kubenswrapper[4813]: I1125 10:46:54.290121 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f75dt" event={"ID":"3482a89d-4a9f-42fc-8a1e-6a8aee3d0038","Type":"ContainerDied","Data":"3bef72e5b0511b7bf21341b0680fad31eda5270e08d1a8e08cf464eed5dce58a"} Nov 25 10:46:54 crc kubenswrapper[4813]: I1125 10:46:54.290149 4813 scope.go:117] "RemoveContainer" containerID="b3473e1143434e8d5b8de2390c230e7fa308c73126b4ba28eb2362d6f478b484" Nov 25 10:46:54 crc kubenswrapper[4813]: I1125 10:46:54.336455 4813 scope.go:117] "RemoveContainer" containerID="5554f22366a8e9c52f39038d481f91a6833c353511f3ca6162f0231933ea94ca" Nov 25 10:46:54 crc kubenswrapper[4813]: I1125 10:46:54.337278 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2h2jx\" (UniqueName: \"kubernetes.io/projected/3482a89d-4a9f-42fc-8a1e-6a8aee3d0038-kube-api-access-2h2jx\") pod \"3482a89d-4a9f-42fc-8a1e-6a8aee3d0038\" (UID: \"3482a89d-4a9f-42fc-8a1e-6a8aee3d0038\") " Nov 25 10:46:54 crc kubenswrapper[4813]: I1125 10:46:54.337431 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3482a89d-4a9f-42fc-8a1e-6a8aee3d0038-catalog-content\") pod \"3482a89d-4a9f-42fc-8a1e-6a8aee3d0038\" (UID: \"3482a89d-4a9f-42fc-8a1e-6a8aee3d0038\") " Nov 25 10:46:54 crc kubenswrapper[4813]: I1125 10:46:54.337479 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/3482a89d-4a9f-42fc-8a1e-6a8aee3d0038-utilities\") pod \"3482a89d-4a9f-42fc-8a1e-6a8aee3d0038\" (UID: \"3482a89d-4a9f-42fc-8a1e-6a8aee3d0038\") " Nov 25 10:46:54 crc kubenswrapper[4813]: I1125 10:46:54.338882 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3482a89d-4a9f-42fc-8a1e-6a8aee3d0038-utilities" (OuterVolumeSpecName: "utilities") pod "3482a89d-4a9f-42fc-8a1e-6a8aee3d0038" (UID: "3482a89d-4a9f-42fc-8a1e-6a8aee3d0038"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 10:46:54 crc kubenswrapper[4813]: I1125 10:46:54.346123 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3482a89d-4a9f-42fc-8a1e-6a8aee3d0038-kube-api-access-2h2jx" (OuterVolumeSpecName: "kube-api-access-2h2jx") pod "3482a89d-4a9f-42fc-8a1e-6a8aee3d0038" (UID: "3482a89d-4a9f-42fc-8a1e-6a8aee3d0038"). InnerVolumeSpecName "kube-api-access-2h2jx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:46:54 crc kubenswrapper[4813]: I1125 10:46:54.368455 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3482a89d-4a9f-42fc-8a1e-6a8aee3d0038-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3482a89d-4a9f-42fc-8a1e-6a8aee3d0038" (UID: "3482a89d-4a9f-42fc-8a1e-6a8aee3d0038"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 10:46:54 crc kubenswrapper[4813]: I1125 10:46:54.401973 4813 scope.go:117] "RemoveContainer" containerID="5df2c5f8d32840967bcddd320c92fae051f612f96c4c9207bdfea5b9d7940fa2" Nov 25 10:46:54 crc kubenswrapper[4813]: I1125 10:46:54.420892 4813 scope.go:117] "RemoveContainer" containerID="b3473e1143434e8d5b8de2390c230e7fa308c73126b4ba28eb2362d6f478b484" Nov 25 10:46:54 crc kubenswrapper[4813]: E1125 10:46:54.421545 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3473e1143434e8d5b8de2390c230e7fa308c73126b4ba28eb2362d6f478b484\": container with ID starting with b3473e1143434e8d5b8de2390c230e7fa308c73126b4ba28eb2362d6f478b484 not found: ID does not exist" containerID="b3473e1143434e8d5b8de2390c230e7fa308c73126b4ba28eb2362d6f478b484" Nov 25 10:46:54 crc kubenswrapper[4813]: I1125 10:46:54.421592 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3473e1143434e8d5b8de2390c230e7fa308c73126b4ba28eb2362d6f478b484"} err="failed to get container status \"b3473e1143434e8d5b8de2390c230e7fa308c73126b4ba28eb2362d6f478b484\": rpc error: code = NotFound desc = could not find container \"b3473e1143434e8d5b8de2390c230e7fa308c73126b4ba28eb2362d6f478b484\": container with ID starting with b3473e1143434e8d5b8de2390c230e7fa308c73126b4ba28eb2362d6f478b484 not found: ID does not exist" Nov 25 10:46:54 crc kubenswrapper[4813]: I1125 10:46:54.421627 4813 scope.go:117] "RemoveContainer" containerID="5554f22366a8e9c52f39038d481f91a6833c353511f3ca6162f0231933ea94ca" Nov 25 10:46:54 crc kubenswrapper[4813]: E1125 10:46:54.422797 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5554f22366a8e9c52f39038d481f91a6833c353511f3ca6162f0231933ea94ca\": container with ID starting with 5554f22366a8e9c52f39038d481f91a6833c353511f3ca6162f0231933ea94ca not found: ID does not exist" 
containerID="5554f22366a8e9c52f39038d481f91a6833c353511f3ca6162f0231933ea94ca" Nov 25 10:46:54 crc kubenswrapper[4813]: I1125 10:46:54.422843 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5554f22366a8e9c52f39038d481f91a6833c353511f3ca6162f0231933ea94ca"} err="failed to get container status \"5554f22366a8e9c52f39038d481f91a6833c353511f3ca6162f0231933ea94ca\": rpc error: code = NotFound desc = could not find container \"5554f22366a8e9c52f39038d481f91a6833c353511f3ca6162f0231933ea94ca\": container with ID starting with 5554f22366a8e9c52f39038d481f91a6833c353511f3ca6162f0231933ea94ca not found: ID does not exist" Nov 25 10:46:54 crc kubenswrapper[4813]: I1125 10:46:54.422872 4813 scope.go:117] "RemoveContainer" containerID="5df2c5f8d32840967bcddd320c92fae051f612f96c4c9207bdfea5b9d7940fa2" Nov 25 10:46:54 crc kubenswrapper[4813]: E1125 10:46:54.423159 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5df2c5f8d32840967bcddd320c92fae051f612f96c4c9207bdfea5b9d7940fa2\": container with ID starting with 5df2c5f8d32840967bcddd320c92fae051f612f96c4c9207bdfea5b9d7940fa2 not found: ID does not exist" containerID="5df2c5f8d32840967bcddd320c92fae051f612f96c4c9207bdfea5b9d7940fa2" Nov 25 10:46:54 crc kubenswrapper[4813]: I1125 10:46:54.423185 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5df2c5f8d32840967bcddd320c92fae051f612f96c4c9207bdfea5b9d7940fa2"} err="failed to get container status \"5df2c5f8d32840967bcddd320c92fae051f612f96c4c9207bdfea5b9d7940fa2\": rpc error: code = NotFound desc = could not find container \"5df2c5f8d32840967bcddd320c92fae051f612f96c4c9207bdfea5b9d7940fa2\": container with ID starting with 5df2c5f8d32840967bcddd320c92fae051f612f96c4c9207bdfea5b9d7940fa2 not found: ID does not exist" Nov 25 10:46:54 crc kubenswrapper[4813]: I1125 10:46:54.439373 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2h2jx\" (UniqueName: \"kubernetes.io/projected/3482a89d-4a9f-42fc-8a1e-6a8aee3d0038-kube-api-access-2h2jx\") on node \"crc\" DevicePath \"\"" Nov 25 10:46:54 crc kubenswrapper[4813]: I1125 10:46:54.439422 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3482a89d-4a9f-42fc-8a1e-6a8aee3d0038-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 10:46:54 crc kubenswrapper[4813]: I1125 10:46:54.439437 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3482a89d-4a9f-42fc-8a1e-6a8aee3d0038-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 10:46:54 crc kubenswrapper[4813]: I1125 10:46:54.617655 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-f75dt"] Nov 25 10:46:54 crc kubenswrapper[4813]: I1125 10:46:54.623379 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-f75dt"] Nov 25 10:46:55 crc kubenswrapper[4813]: I1125 10:46:55.629977 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3482a89d-4a9f-42fc-8a1e-6a8aee3d0038" path="/var/lib/kubelet/pods/3482a89d-4a9f-42fc-8a1e-6a8aee3d0038/volumes" Nov 25 10:47:03 crc kubenswrapper[4813]: I1125 10:47:03.818698 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-6b84b955f5-mmrm7" Nov 25 10:47:04 crc 
kubenswrapper[4813]: I1125 10:47:04.590037 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-4mmlv"] Nov 25 10:47:04 crc kubenswrapper[4813]: E1125 10:47:04.590293 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3482a89d-4a9f-42fc-8a1e-6a8aee3d0038" containerName="extract-content" Nov 25 10:47:04 crc kubenswrapper[4813]: I1125 10:47:04.590311 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="3482a89d-4a9f-42fc-8a1e-6a8aee3d0038" containerName="extract-content" Nov 25 10:47:04 crc kubenswrapper[4813]: E1125 10:47:04.590323 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3482a89d-4a9f-42fc-8a1e-6a8aee3d0038" containerName="registry-server" Nov 25 10:47:04 crc kubenswrapper[4813]: I1125 10:47:04.590331 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="3482a89d-4a9f-42fc-8a1e-6a8aee3d0038" containerName="registry-server" Nov 25 10:47:04 crc kubenswrapper[4813]: E1125 10:47:04.590344 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3482a89d-4a9f-42fc-8a1e-6a8aee3d0038" containerName="extract-utilities" Nov 25 10:47:04 crc kubenswrapper[4813]: I1125 10:47:04.590351 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="3482a89d-4a9f-42fc-8a1e-6a8aee3d0038" containerName="extract-utilities" Nov 25 10:47:04 crc kubenswrapper[4813]: I1125 10:47:04.590446 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="3482a89d-4a9f-42fc-8a1e-6a8aee3d0038" containerName="registry-server" Nov 25 10:47:04 crc kubenswrapper[4813]: I1125 10:47:04.590904 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-6998585d5-4mmlv" Nov 25 10:47:04 crc kubenswrapper[4813]: I1125 10:47:04.593885 4813 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-ppl7c" Nov 25 10:47:04 crc kubenswrapper[4813]: I1125 10:47:04.598365 4813 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Nov 25 10:47:04 crc kubenswrapper[4813]: I1125 10:47:04.601319 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-z9zl6"] Nov 25 10:47:04 crc kubenswrapper[4813]: I1125 10:47:04.603825 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-z9zl6" Nov 25 10:47:04 crc kubenswrapper[4813]: I1125 10:47:04.606720 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Nov 25 10:47:04 crc kubenswrapper[4813]: I1125 10:47:04.607033 4813 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Nov 25 10:47:04 crc kubenswrapper[4813]: I1125 10:47:04.627603 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-4mmlv"] Nov 25 10:47:04 crc kubenswrapper[4813]: I1125 10:47:04.704098 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-gwnrv"] Nov 25 10:47:04 crc kubenswrapper[4813]: I1125 10:47:04.705290 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-gwnrv" Nov 25 10:47:04 crc kubenswrapper[4813]: I1125 10:47:04.711998 4813 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Nov 25 10:47:04 crc kubenswrapper[4813]: I1125 10:47:04.712643 4813 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-q54ck" Nov 25 10:47:04 crc kubenswrapper[4813]: I1125 10:47:04.712870 4813 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Nov 25 10:47:04 crc kubenswrapper[4813]: I1125 10:47:04.712883 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Nov 25 10:47:04 crc kubenswrapper[4813]: I1125 10:47:04.715080 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6c7b4b5f48-brpp6"] Nov 25 10:47:04 crc kubenswrapper[4813]: I1125 10:47:04.716156 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6c7b4b5f48-brpp6" Nov 25 10:47:04 crc kubenswrapper[4813]: I1125 10:47:04.722025 4813 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Nov 25 10:47:04 crc kubenswrapper[4813]: I1125 10:47:04.748821 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6c7b4b5f48-brpp6"] Nov 25 10:47:04 crc kubenswrapper[4813]: I1125 10:47:04.778006 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/49ae2f6f-61f5-4577-ad9f-cce3678795ef-cert\") pod \"frr-k8s-webhook-server-6998585d5-4mmlv\" (UID: \"49ae2f6f-61f5-4577-ad9f-cce3678795ef\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-4mmlv" Nov 25 10:47:04 crc kubenswrapper[4813]: I1125 10:47:04.778063 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/851ec932-482a-43c0-a100-ee8378bb527e-reloader\") pod \"frr-k8s-z9zl6\" (UID: \"851ec932-482a-43c0-a100-ee8378bb527e\") " pod="metallb-system/frr-k8s-z9zl6" Nov 25 10:47:04 crc kubenswrapper[4813]: I1125 10:47:04.778088 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgjwb\" (UniqueName: \"kubernetes.io/projected/49ae2f6f-61f5-4577-ad9f-cce3678795ef-kube-api-access-kgjwb\") pod \"frr-k8s-webhook-server-6998585d5-4mmlv\" (UID: \"49ae2f6f-61f5-4577-ad9f-cce3678795ef\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-4mmlv" Nov 25 10:47:04 crc kubenswrapper[4813]: I1125 10:47:04.778108 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/851ec932-482a-43c0-a100-ee8378bb527e-frr-sockets\") pod \"frr-k8s-z9zl6\" (UID: \"851ec932-482a-43c0-a100-ee8378bb527e\") " pod="metallb-system/frr-k8s-z9zl6" Nov 25 10:47:04 crc kubenswrapper[4813]: I1125 10:47:04.778133 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/851ec932-482a-43c0-a100-ee8378bb527e-frr-startup\") pod \"frr-k8s-z9zl6\" (UID: \"851ec932-482a-43c0-a100-ee8378bb527e\") " pod="metallb-system/frr-k8s-z9zl6" Nov 25 10:47:04 crc kubenswrapper[4813]: I1125 10:47:04.778155 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/851ec932-482a-43c0-a100-ee8378bb527e-metrics\") pod \"frr-k8s-z9zl6\" (UID: \"851ec932-482a-43c0-a100-ee8378bb527e\") " pod="metallb-system/frr-k8s-z9zl6" Nov 25 10:47:04 crc kubenswrapper[4813]: I1125 10:47:04.778173 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/851ec932-482a-43c0-a100-ee8378bb527e-metrics-certs\") pod \"frr-k8s-z9zl6\" (UID: \"851ec932-482a-43c0-a100-ee8378bb527e\") " pod="metallb-system/frr-k8s-z9zl6" Nov 25 10:47:04 crc kubenswrapper[4813]: I1125 10:47:04.778326 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/851ec932-482a-43c0-a100-ee8378bb527e-frr-conf\") pod \"frr-k8s-z9zl6\" (UID: \"851ec932-482a-43c0-a100-ee8378bb527e\") " pod="metallb-system/frr-k8s-z9zl6" Nov 25 10:47:04 crc kubenswrapper[4813]: I1125 10:47:04.778354 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njvg6\" (UniqueName: \"kubernetes.io/projected/851ec932-482a-43c0-a100-ee8378bb527e-kube-api-access-njvg6\") pod \"frr-k8s-z9zl6\" (UID: \"851ec932-482a-43c0-a100-ee8378bb527e\") " pod="metallb-system/frr-k8s-z9zl6" Nov 25 10:47:04 crc kubenswrapper[4813]: I1125 10:47:04.879320 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/49ae2f6f-61f5-4577-ad9f-cce3678795ef-cert\") pod \"frr-k8s-webhook-server-6998585d5-4mmlv\" (UID: \"49ae2f6f-61f5-4577-ad9f-cce3678795ef\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-4mmlv" Nov 25 10:47:04 crc kubenswrapper[4813]: I1125 10:47:04.879389 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/269e623a-f673-45c5-8377-29b4d98a8778-cert\") pod \"controller-6c7b4b5f48-brpp6\" (UID: \"269e623a-f673-45c5-8377-29b4d98a8778\") " pod="metallb-system/controller-6c7b4b5f48-brpp6" Nov 25 10:47:04 crc kubenswrapper[4813]: I1125 10:47:04.879432 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/851ec932-482a-43c0-a100-ee8378bb527e-reloader\") pod \"frr-k8s-z9zl6\" (UID: \"851ec932-482a-43c0-a100-ee8378bb527e\") " pod="metallb-system/frr-k8s-z9zl6" Nov 25 10:47:04 crc kubenswrapper[4813]: I1125 10:47:04.879459 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgjwb\" (UniqueName: \"kubernetes.io/projected/49ae2f6f-61f5-4577-ad9f-cce3678795ef-kube-api-access-kgjwb\") pod \"frr-k8s-webhook-server-6998585d5-4mmlv\" (UID: \"49ae2f6f-61f5-4577-ad9f-cce3678795ef\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-4mmlv" Nov 25 10:47:04 crc kubenswrapper[4813]: I1125 10:47:04.879478 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/851ec932-482a-43c0-a100-ee8378bb527e-frr-sockets\") pod \"frr-k8s-z9zl6\" (UID: \"851ec932-482a-43c0-a100-ee8378bb527e\") " pod="metallb-system/frr-k8s-z9zl6" Nov 25 10:47:04 crc kubenswrapper[4813]: I1125 10:47:04.879512 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5nrm\" (UniqueName: 
\"kubernetes.io/projected/269e623a-f673-45c5-8377-29b4d98a8778-kube-api-access-p5nrm\") pod \"controller-6c7b4b5f48-brpp6\" (UID: \"269e623a-f673-45c5-8377-29b4d98a8778\") " pod="metallb-system/controller-6c7b4b5f48-brpp6" Nov 25 10:47:04 crc kubenswrapper[4813]: I1125 10:47:04.879534 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/851ec932-482a-43c0-a100-ee8378bb527e-frr-startup\") pod \"frr-k8s-z9zl6\" (UID: \"851ec932-482a-43c0-a100-ee8378bb527e\") " pod="metallb-system/frr-k8s-z9zl6" Nov 25 10:47:04 crc kubenswrapper[4813]: I1125 10:47:04.879557 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/724aef58-6386-4f8e-bfaf-231b5dfcea9b-memberlist\") pod \"speaker-gwnrv\" (UID: \"724aef58-6386-4f8e-bfaf-231b5dfcea9b\") " pod="metallb-system/speaker-gwnrv" Nov 25 10:47:04 crc kubenswrapper[4813]: I1125 10:47:04.879576 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/851ec932-482a-43c0-a100-ee8378bb527e-metrics\") pod \"frr-k8s-z9zl6\" (UID: \"851ec932-482a-43c0-a100-ee8378bb527e\") " pod="metallb-system/frr-k8s-z9zl6" Nov 25 10:47:04 crc kubenswrapper[4813]: I1125 10:47:04.879591 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/851ec932-482a-43c0-a100-ee8378bb527e-metrics-certs\") pod \"frr-k8s-z9zl6\" (UID: \"851ec932-482a-43c0-a100-ee8378bb527e\") " pod="metallb-system/frr-k8s-z9zl6" Nov 25 10:47:04 crc kubenswrapper[4813]: I1125 10:47:04.879637 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/724aef58-6386-4f8e-bfaf-231b5dfcea9b-metrics-certs\") pod \"speaker-gwnrv\" (UID: \"724aef58-6386-4f8e-bfaf-231b5dfcea9b\") " pod="metallb-system/speaker-gwnrv" Nov 25 10:47:04 crc kubenswrapper[4813]: I1125 10:47:04.879660 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/851ec932-482a-43c0-a100-ee8378bb527e-frr-conf\") pod \"frr-k8s-z9zl6\" (UID: \"851ec932-482a-43c0-a100-ee8378bb527e\") " pod="metallb-system/frr-k8s-z9zl6" Nov 25 10:47:04 crc kubenswrapper[4813]: I1125 10:47:04.879699 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/724aef58-6386-4f8e-bfaf-231b5dfcea9b-metallb-excludel2\") pod \"speaker-gwnrv\" (UID: \"724aef58-6386-4f8e-bfaf-231b5dfcea9b\") " pod="metallb-system/speaker-gwnrv" Nov 25 10:47:04 crc kubenswrapper[4813]: I1125 10:47:04.879727 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bs69g\" (UniqueName: \"kubernetes.io/projected/724aef58-6386-4f8e-bfaf-231b5dfcea9b-kube-api-access-bs69g\") pod \"speaker-gwnrv\" (UID: \"724aef58-6386-4f8e-bfaf-231b5dfcea9b\") " pod="metallb-system/speaker-gwnrv" Nov 25 10:47:04 crc kubenswrapper[4813]: I1125 10:47:04.879753 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njvg6\" (UniqueName: \"kubernetes.io/projected/851ec932-482a-43c0-a100-ee8378bb527e-kube-api-access-njvg6\") pod \"frr-k8s-z9zl6\" (UID: \"851ec932-482a-43c0-a100-ee8378bb527e\") " 
pod="metallb-system/frr-k8s-z9zl6" Nov 25 10:47:04 crc kubenswrapper[4813]: I1125 10:47:04.879779 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/269e623a-f673-45c5-8377-29b4d98a8778-metrics-certs\") pod \"controller-6c7b4b5f48-brpp6\" (UID: \"269e623a-f673-45c5-8377-29b4d98a8778\") " pod="metallb-system/controller-6c7b4b5f48-brpp6" Nov 25 10:47:04 crc kubenswrapper[4813]: I1125 10:47:04.881158 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/851ec932-482a-43c0-a100-ee8378bb527e-reloader\") pod \"frr-k8s-z9zl6\" (UID: \"851ec932-482a-43c0-a100-ee8378bb527e\") " pod="metallb-system/frr-k8s-z9zl6" Nov 25 10:47:04 crc kubenswrapper[4813]: I1125 10:47:04.881671 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/851ec932-482a-43c0-a100-ee8378bb527e-frr-startup\") pod \"frr-k8s-z9zl6\" (UID: \"851ec932-482a-43c0-a100-ee8378bb527e\") " pod="metallb-system/frr-k8s-z9zl6" Nov 25 10:47:04 crc kubenswrapper[4813]: I1125 10:47:04.881782 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/851ec932-482a-43c0-a100-ee8378bb527e-frr-conf\") pod \"frr-k8s-z9zl6\" (UID: \"851ec932-482a-43c0-a100-ee8378bb527e\") " pod="metallb-system/frr-k8s-z9zl6" Nov 25 10:47:04 crc kubenswrapper[4813]: I1125 10:47:04.882091 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/851ec932-482a-43c0-a100-ee8378bb527e-metrics\") pod \"frr-k8s-z9zl6\" (UID: \"851ec932-482a-43c0-a100-ee8378bb527e\") " pod="metallb-system/frr-k8s-z9zl6" Nov 25 10:47:04 crc kubenswrapper[4813]: I1125 10:47:04.882214 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/851ec932-482a-43c0-a100-ee8378bb527e-frr-sockets\") pod \"frr-k8s-z9zl6\" (UID: \"851ec932-482a-43c0-a100-ee8378bb527e\") " pod="metallb-system/frr-k8s-z9zl6" Nov 25 10:47:04 crc kubenswrapper[4813]: I1125 10:47:04.889565 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/49ae2f6f-61f5-4577-ad9f-cce3678795ef-cert\") pod \"frr-k8s-webhook-server-6998585d5-4mmlv\" (UID: \"49ae2f6f-61f5-4577-ad9f-cce3678795ef\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-4mmlv" Nov 25 10:47:04 crc kubenswrapper[4813]: I1125 10:47:04.907497 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/851ec932-482a-43c0-a100-ee8378bb527e-metrics-certs\") pod \"frr-k8s-z9zl6\" (UID: \"851ec932-482a-43c0-a100-ee8378bb527e\") " pod="metallb-system/frr-k8s-z9zl6" Nov 25 10:47:04 crc kubenswrapper[4813]: I1125 10:47:04.927389 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgjwb\" (UniqueName: \"kubernetes.io/projected/49ae2f6f-61f5-4577-ad9f-cce3678795ef-kube-api-access-kgjwb\") pod \"frr-k8s-webhook-server-6998585d5-4mmlv\" (UID: \"49ae2f6f-61f5-4577-ad9f-cce3678795ef\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-4mmlv" Nov 25 10:47:04 crc kubenswrapper[4813]: I1125 10:47:04.933924 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njvg6\" (UniqueName: 
\"kubernetes.io/projected/851ec932-482a-43c0-a100-ee8378bb527e-kube-api-access-njvg6\") pod \"frr-k8s-z9zl6\" (UID: \"851ec932-482a-43c0-a100-ee8378bb527e\") " pod="metallb-system/frr-k8s-z9zl6" Nov 25 10:47:04 crc kubenswrapper[4813]: I1125 10:47:04.981085 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5nrm\" (UniqueName: \"kubernetes.io/projected/269e623a-f673-45c5-8377-29b4d98a8778-kube-api-access-p5nrm\") pod \"controller-6c7b4b5f48-brpp6\" (UID: \"269e623a-f673-45c5-8377-29b4d98a8778\") " pod="metallb-system/controller-6c7b4b5f48-brpp6" Nov 25 10:47:04 crc kubenswrapper[4813]: I1125 10:47:04.981152 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/724aef58-6386-4f8e-bfaf-231b5dfcea9b-memberlist\") pod \"speaker-gwnrv\" (UID: \"724aef58-6386-4f8e-bfaf-231b5dfcea9b\") " pod="metallb-system/speaker-gwnrv" Nov 25 10:47:04 crc kubenswrapper[4813]: I1125 10:47:04.981217 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/724aef58-6386-4f8e-bfaf-231b5dfcea9b-metrics-certs\") pod \"speaker-gwnrv\" (UID: \"724aef58-6386-4f8e-bfaf-231b5dfcea9b\") " pod="metallb-system/speaker-gwnrv" Nov 25 10:47:04 crc kubenswrapper[4813]: I1125 10:47:04.981252 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/724aef58-6386-4f8e-bfaf-231b5dfcea9b-metallb-excludel2\") pod \"speaker-gwnrv\" (UID: \"724aef58-6386-4f8e-bfaf-231b5dfcea9b\") " pod="metallb-system/speaker-gwnrv" Nov 25 10:47:04 crc kubenswrapper[4813]: I1125 10:47:04.981278 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs69g\" (UniqueName: \"kubernetes.io/projected/724aef58-6386-4f8e-bfaf-231b5dfcea9b-kube-api-access-bs69g\") pod \"speaker-gwnrv\" (UID: \"724aef58-6386-4f8e-bfaf-231b5dfcea9b\") " pod="metallb-system/speaker-gwnrv" Nov 25 10:47:04 crc kubenswrapper[4813]: I1125 10:47:04.981320 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/269e623a-f673-45c5-8377-29b4d98a8778-metrics-certs\") pod \"controller-6c7b4b5f48-brpp6\" (UID: \"269e623a-f673-45c5-8377-29b4d98a8778\") " pod="metallb-system/controller-6c7b4b5f48-brpp6" Nov 25 10:47:04 crc kubenswrapper[4813]: I1125 10:47:04.981359 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/269e623a-f673-45c5-8377-29b4d98a8778-cert\") pod \"controller-6c7b4b5f48-brpp6\" (UID: \"269e623a-f673-45c5-8377-29b4d98a8778\") " pod="metallb-system/controller-6c7b4b5f48-brpp6" Nov 25 10:47:04 crc kubenswrapper[4813]: E1125 10:47:04.981567 4813 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 25 10:47:04 crc kubenswrapper[4813]: E1125 10:47:04.981633 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/724aef58-6386-4f8e-bfaf-231b5dfcea9b-memberlist podName:724aef58-6386-4f8e-bfaf-231b5dfcea9b nodeName:}" failed. No retries permitted until 2025-11-25 10:47:05.481613026 +0000 UTC m=+922.611322912 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/724aef58-6386-4f8e-bfaf-231b5dfcea9b-memberlist") pod "speaker-gwnrv" (UID: "724aef58-6386-4f8e-bfaf-231b5dfcea9b") : secret "metallb-memberlist" not found Nov 25 10:47:04 crc kubenswrapper[4813]: I1125 10:47:04.984364 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/724aef58-6386-4f8e-bfaf-231b5dfcea9b-metallb-excludel2\") pod \"speaker-gwnrv\" (UID: \"724aef58-6386-4f8e-bfaf-231b5dfcea9b\") " pod="metallb-system/speaker-gwnrv" Nov 25 10:47:04 crc kubenswrapper[4813]: I1125 10:47:04.987816 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/269e623a-f673-45c5-8377-29b4d98a8778-metrics-certs\") pod \"controller-6c7b4b5f48-brpp6\" (UID: \"269e623a-f673-45c5-8377-29b4d98a8778\") " pod="metallb-system/controller-6c7b4b5f48-brpp6" Nov 25 10:47:04 crc kubenswrapper[4813]: I1125 10:47:04.988579 4813 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Nov 25 10:47:04 crc kubenswrapper[4813]: I1125 10:47:04.996058 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/724aef58-6386-4f8e-bfaf-231b5dfcea9b-metrics-certs\") pod \"speaker-gwnrv\" (UID: \"724aef58-6386-4f8e-bfaf-231b5dfcea9b\") " pod="metallb-system/speaker-gwnrv" Nov 25 10:47:04 crc kubenswrapper[4813]: I1125 10:47:04.996228 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/269e623a-f673-45c5-8377-29b4d98a8778-cert\") pod \"controller-6c7b4b5f48-brpp6\" (UID: \"269e623a-f673-45c5-8377-29b4d98a8778\") " pod="metallb-system/controller-6c7b4b5f48-brpp6" Nov 25 10:47:05 crc kubenswrapper[4813]: I1125 10:47:05.002221 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5nrm\" (UniqueName: \"kubernetes.io/projected/269e623a-f673-45c5-8377-29b4d98a8778-kube-api-access-p5nrm\") pod \"controller-6c7b4b5f48-brpp6\" (UID: \"269e623a-f673-45c5-8377-29b4d98a8778\") " pod="metallb-system/controller-6c7b4b5f48-brpp6" Nov 25 10:47:05 crc kubenswrapper[4813]: I1125 10:47:05.003125 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bs69g\" (UniqueName: \"kubernetes.io/projected/724aef58-6386-4f8e-bfaf-231b5dfcea9b-kube-api-access-bs69g\") pod \"speaker-gwnrv\" (UID: \"724aef58-6386-4f8e-bfaf-231b5dfcea9b\") " pod="metallb-system/speaker-gwnrv" Nov 25 10:47:05 crc kubenswrapper[4813]: I1125 10:47:05.036322 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6c7b4b5f48-brpp6" Nov 25 10:47:05 crc kubenswrapper[4813]: I1125 10:47:05.211418 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-6998585d5-4mmlv" Nov 25 10:47:05 crc kubenswrapper[4813]: I1125 10:47:05.235150 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-z9zl6" Nov 25 10:47:05 crc kubenswrapper[4813]: I1125 10:47:05.460203 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-4mmlv"] Nov 25 10:47:05 crc kubenswrapper[4813]: W1125 10:47:05.467806 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod49ae2f6f_61f5_4577_ad9f_cce3678795ef.slice/crio-9aa42fbd2845a5637de167a68682e2a70f1cd2534ebe9e658b5d2a389817e0c3 WatchSource:0}: Error finding container 9aa42fbd2845a5637de167a68682e2a70f1cd2534ebe9e658b5d2a389817e0c3: Status 404 returned error can't find the container with id 9aa42fbd2845a5637de167a68682e2a70f1cd2534ebe9e658b5d2a389817e0c3 Nov 25 10:47:05 crc kubenswrapper[4813]: I1125 10:47:05.491979 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/724aef58-6386-4f8e-bfaf-231b5dfcea9b-memberlist\") pod \"speaker-gwnrv\" (UID: \"724aef58-6386-4f8e-bfaf-231b5dfcea9b\") " pod="metallb-system/speaker-gwnrv" Nov 25 10:47:05 crc kubenswrapper[4813]: E1125 10:47:05.492261 4813 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 25 10:47:05 crc kubenswrapper[4813]: E1125 10:47:05.492378 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/724aef58-6386-4f8e-bfaf-231b5dfcea9b-memberlist podName:724aef58-6386-4f8e-bfaf-231b5dfcea9b nodeName:}" failed. No retries permitted until 2025-11-25 10:47:06.492352816 +0000 UTC m=+923.622062712 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/724aef58-6386-4f8e-bfaf-231b5dfcea9b-memberlist") pod "speaker-gwnrv" (UID: "724aef58-6386-4f8e-bfaf-231b5dfcea9b") : secret "metallb-memberlist" not found Nov 25 10:47:05 crc kubenswrapper[4813]: I1125 10:47:05.512659 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6c7b4b5f48-brpp6"] Nov 25 10:47:05 crc kubenswrapper[4813]: W1125 10:47:05.521996 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod269e623a_f673_45c5_8377_29b4d98a8778.slice/crio-ff5fcfbec9c4f4f552b52d28f2655d08e6818965f255a967e24bc8e7761fc8bf WatchSource:0}: Error finding container ff5fcfbec9c4f4f552b52d28f2655d08e6818965f255a967e24bc8e7761fc8bf: Status 404 returned error can't find the container with id ff5fcfbec9c4f4f552b52d28f2655d08e6818965f255a967e24bc8e7761fc8bf Nov 25 10:47:06 crc kubenswrapper[4813]: I1125 10:47:06.372174 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-brpp6" event={"ID":"269e623a-f673-45c5-8377-29b4d98a8778","Type":"ContainerStarted","Data":"ce7444b67ff840ca337f96005113faf5a4fdc7b40f81aaecb2254eb39ca27ce8"} Nov 25 10:47:06 crc kubenswrapper[4813]: I1125 10:47:06.373120 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-brpp6" event={"ID":"269e623a-f673-45c5-8377-29b4d98a8778","Type":"ContainerStarted","Data":"2675a37b8de5239966de7762c95ff55a3dbb5f0467791e3281cf2873e5480448"} Nov 25 10:47:06 crc kubenswrapper[4813]: I1125 10:47:06.373150 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6c7b4b5f48-brpp6" Nov 25 10:47:06 crc kubenswrapper[4813]: I1125 10:47:06.373165 4813 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-brpp6" event={"ID":"269e623a-f673-45c5-8377-29b4d98a8778","Type":"ContainerStarted","Data":"ff5fcfbec9c4f4f552b52d28f2655d08e6818965f255a967e24bc8e7761fc8bf"} Nov 25 10:47:06 crc kubenswrapper[4813]: I1125 10:47:06.376493 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-6998585d5-4mmlv" event={"ID":"49ae2f6f-61f5-4577-ad9f-cce3678795ef","Type":"ContainerStarted","Data":"9aa42fbd2845a5637de167a68682e2a70f1cd2534ebe9e658b5d2a389817e0c3"} Nov 25 10:47:06 crc kubenswrapper[4813]: I1125 10:47:06.378821 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z9zl6" event={"ID":"851ec932-482a-43c0-a100-ee8378bb527e","Type":"ContainerStarted","Data":"6d983c0b3b66d6bcc6b88a340fc116b499a319865eeb069307469adb719541ce"} Nov 25 10:47:06 crc kubenswrapper[4813]: I1125 10:47:06.413860 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6c7b4b5f48-brpp6" podStartSLOduration=2.413832474 podStartE2EDuration="2.413832474s" podCreationTimestamp="2025-11-25 10:47:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 10:47:06.408345328 +0000 UTC m=+923.538055234" watchObservedRunningTime="2025-11-25 10:47:06.413832474 +0000 UTC m=+923.543542360" Nov 25 10:47:06 crc kubenswrapper[4813]: I1125 10:47:06.507088 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/724aef58-6386-4f8e-bfaf-231b5dfcea9b-memberlist\") pod \"speaker-gwnrv\" (UID: \"724aef58-6386-4f8e-bfaf-231b5dfcea9b\") " pod="metallb-system/speaker-gwnrv" Nov 25 10:47:06 crc kubenswrapper[4813]: I1125 10:47:06.517853 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/724aef58-6386-4f8e-bfaf-231b5dfcea9b-memberlist\") pod \"speaker-gwnrv\" (UID: \"724aef58-6386-4f8e-bfaf-231b5dfcea9b\") " pod="metallb-system/speaker-gwnrv" Nov 25 10:47:06 crc kubenswrapper[4813]: I1125 10:47:06.524365 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-gwnrv" Nov 25 10:47:06 crc kubenswrapper[4813]: W1125 10:47:06.551148 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod724aef58_6386_4f8e_bfaf_231b5dfcea9b.slice/crio-138fe2ea2c1480aca425d80e5aa051eac900f3072209a9e9f8bde87dea9a4452 WatchSource:0}: Error finding container 138fe2ea2c1480aca425d80e5aa051eac900f3072209a9e9f8bde87dea9a4452: Status 404 returned error can't find the container with id 138fe2ea2c1480aca425d80e5aa051eac900f3072209a9e9f8bde87dea9a4452 Nov 25 10:47:07 crc kubenswrapper[4813]: I1125 10:47:07.395522 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-gwnrv" event={"ID":"724aef58-6386-4f8e-bfaf-231b5dfcea9b","Type":"ContainerStarted","Data":"d712750002c3b4fd914672b5d1b0a3157cf4d512701f55e6de0ee809e925139b"} Nov 25 10:47:07 crc kubenswrapper[4813]: I1125 10:47:07.396174 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-gwnrv" event={"ID":"724aef58-6386-4f8e-bfaf-231b5dfcea9b","Type":"ContainerStarted","Data":"138fe2ea2c1480aca425d80e5aa051eac900f3072209a9e9f8bde87dea9a4452"} Nov 25 10:47:08 crc kubenswrapper[4813]: I1125 10:47:08.409632 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-gwnrv" event={"ID":"724aef58-6386-4f8e-bfaf-231b5dfcea9b","Type":"ContainerStarted","Data":"b685191faf7c0d60c8b4b7f3b366aa2627f68844b49936a7a8c4349868765451"} Nov 25 10:47:08 crc kubenswrapper[4813]: I1125 10:47:08.409807 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-gwnrv" Nov 25 10:47:08 crc kubenswrapper[4813]: I1125 10:47:08.440021 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-gwnrv" podStartSLOduration=4.439996635 podStartE2EDuration="4.439996635s" podCreationTimestamp="2025-11-25 10:47:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 10:47:08.4380588 +0000 UTC m=+925.567768686" watchObservedRunningTime="2025-11-25 10:47:08.439996635 +0000 UTC m=+925.569706521" Nov 25 10:47:15 crc kubenswrapper[4813]: I1125 10:47:15.044394 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6c7b4b5f48-brpp6" Nov 25 10:47:15 crc kubenswrapper[4813]: I1125 10:47:15.475872 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-6998585d5-4mmlv" event={"ID":"49ae2f6f-61f5-4577-ad9f-cce3678795ef","Type":"ContainerStarted","Data":"e20e54cf9d6148a02b3981e19865f1c2d75495fbbff37d92fabd98f09f22f32e"} Nov 25 10:47:15 crc kubenswrapper[4813]: I1125 10:47:15.476044 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-6998585d5-4mmlv" Nov 25 10:47:15 crc kubenswrapper[4813]: I1125 10:47:15.478035 4813 generic.go:334] "Generic (PLEG): container finished" podID="851ec932-482a-43c0-a100-ee8378bb527e" containerID="e83b0d697842dba88f8c997109924b8f5c2d4d04b64d89bf6ecd26481a9671a7" exitCode=0 Nov 25 10:47:15 crc kubenswrapper[4813]: I1125 10:47:15.478081 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z9zl6" event={"ID":"851ec932-482a-43c0-a100-ee8378bb527e","Type":"ContainerDied","Data":"e83b0d697842dba88f8c997109924b8f5c2d4d04b64d89bf6ecd26481a9671a7"} Nov 25 10:47:15 crc kubenswrapper[4813]: I1125 
10:47:15.493075 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-6998585d5-4mmlv" podStartSLOduration=2.646993467 podStartE2EDuration="11.493049694s" podCreationTimestamp="2025-11-25 10:47:04 +0000 UTC" firstStartedPulling="2025-11-25 10:47:05.470569967 +0000 UTC m=+922.600279853" lastFinishedPulling="2025-11-25 10:47:14.316626194 +0000 UTC m=+931.446336080" observedRunningTime="2025-11-25 10:47:15.491739967 +0000 UTC m=+932.621449863" watchObservedRunningTime="2025-11-25 10:47:15.493049694 +0000 UTC m=+932.622759590" Nov 25 10:47:16 crc kubenswrapper[4813]: I1125 10:47:16.490944 4813 generic.go:334] "Generic (PLEG): container finished" podID="851ec932-482a-43c0-a100-ee8378bb527e" containerID="6b1e486dbe70dae2b0d46f41a40bcde6ef0d0ebfcdf23e22e086a47567d40017" exitCode=0 Nov 25 10:47:16 crc kubenswrapper[4813]: I1125 10:47:16.491038 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z9zl6" event={"ID":"851ec932-482a-43c0-a100-ee8378bb527e","Type":"ContainerDied","Data":"6b1e486dbe70dae2b0d46f41a40bcde6ef0d0ebfcdf23e22e086a47567d40017"} Nov 25 10:47:16 crc kubenswrapper[4813]: I1125 10:47:16.529183 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-gwnrv" Nov 25 10:47:17 crc kubenswrapper[4813]: I1125 10:47:17.500336 4813 generic.go:334] "Generic (PLEG): container finished" podID="851ec932-482a-43c0-a100-ee8378bb527e" containerID="d45dac52574fe7ce5f3899946617f5cae4d7ee5c110bfcdef6155a8e2961229d" exitCode=0 Nov 25 10:47:17 crc kubenswrapper[4813]: I1125 10:47:17.500404 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z9zl6" event={"ID":"851ec932-482a-43c0-a100-ee8378bb527e","Type":"ContainerDied","Data":"d45dac52574fe7ce5f3899946617f5cae4d7ee5c110bfcdef6155a8e2961229d"} Nov 25 10:47:18 crc kubenswrapper[4813]: I1125 10:47:18.511110 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z9zl6" event={"ID":"851ec932-482a-43c0-a100-ee8378bb527e","Type":"ContainerStarted","Data":"d1abad2acf06ddaa1988d1e6c8ad0b66898f9db6b971e6bc5c930a271040d9c8"} Nov 25 10:47:18 crc kubenswrapper[4813]: I1125 10:47:18.511980 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z9zl6" event={"ID":"851ec932-482a-43c0-a100-ee8378bb527e","Type":"ContainerStarted","Data":"6909d8cf8818a16c4f1a48ee082b2f664a4bcffa549cdc3e9c4b3b514c617031"} Nov 25 10:47:18 crc kubenswrapper[4813]: I1125 10:47:18.512054 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z9zl6" event={"ID":"851ec932-482a-43c0-a100-ee8378bb527e","Type":"ContainerStarted","Data":"435d5da1733f1178e092453336a3b8f4bc6dc9bd6c6a988f7f8246fbfa080489"} Nov 25 10:47:19 crc kubenswrapper[4813]: I1125 10:47:19.528103 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z9zl6" event={"ID":"851ec932-482a-43c0-a100-ee8378bb527e","Type":"ContainerStarted","Data":"a2a97c35fab7f6eb2fe3d92bda66dfb41fe43b3e54846dac452534461ee0c4ea"} Nov 25 10:47:20 crc kubenswrapper[4813]: I1125 10:47:20.543161 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z9zl6" event={"ID":"851ec932-482a-43c0-a100-ee8378bb527e","Type":"ContainerStarted","Data":"5aa3bbf7ddde151085365058601e268cbb34bae12c903eca829b2116866b1db1"} Nov 25 10:47:21 crc kubenswrapper[4813]: I1125 10:47:21.553725 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/frr-k8s-z9zl6" event={"ID":"851ec932-482a-43c0-a100-ee8378bb527e","Type":"ContainerStarted","Data":"334d97e666ec295bc02e3e1c65f42dd613ceafec94ddc717f5a19f3eb1299c02"} Nov 25 10:47:21 crc kubenswrapper[4813]: I1125 10:47:21.554192 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-z9zl6" Nov 25 10:47:21 crc kubenswrapper[4813]: I1125 10:47:21.578058 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-z9zl6" podStartSLOduration=8.903728889 podStartE2EDuration="17.578038967s" podCreationTimestamp="2025-11-25 10:47:04 +0000 UTC" firstStartedPulling="2025-11-25 10:47:05.624801608 +0000 UTC m=+922.754511504" lastFinishedPulling="2025-11-25 10:47:14.299111696 +0000 UTC m=+931.428821582" observedRunningTime="2025-11-25 10:47:21.576016029 +0000 UTC m=+938.705725935" watchObservedRunningTime="2025-11-25 10:47:21.578038967 +0000 UTC m=+938.707748853" Nov 25 10:47:22 crc kubenswrapper[4813]: I1125 10:47:22.022112 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-x6h2j"] Nov 25 10:47:22 crc kubenswrapper[4813]: I1125 10:47:22.022956 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-x6h2j" Nov 25 10:47:22 crc kubenswrapper[4813]: I1125 10:47:22.025432 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Nov 25 10:47:22 crc kubenswrapper[4813]: I1125 10:47:22.025797 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-7zdfj" Nov 25 10:47:22 crc kubenswrapper[4813]: I1125 10:47:22.025859 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Nov 25 10:47:22 crc kubenswrapper[4813]: I1125 10:47:22.037946 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-x6h2j"] Nov 25 10:47:22 crc kubenswrapper[4813]: I1125 10:47:22.116354 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkrtd\" (UniqueName: \"kubernetes.io/projected/f912baf1-3c8b-4680-bd2b-8f4074eff6d1-kube-api-access-pkrtd\") pod \"openstack-operator-index-x6h2j\" (UID: \"f912baf1-3c8b-4680-bd2b-8f4074eff6d1\") " pod="openstack-operators/openstack-operator-index-x6h2j" Nov 25 10:47:22 crc kubenswrapper[4813]: I1125 10:47:22.217334 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkrtd\" (UniqueName: \"kubernetes.io/projected/f912baf1-3c8b-4680-bd2b-8f4074eff6d1-kube-api-access-pkrtd\") pod \"openstack-operator-index-x6h2j\" (UID: \"f912baf1-3c8b-4680-bd2b-8f4074eff6d1\") " pod="openstack-operators/openstack-operator-index-x6h2j" Nov 25 10:47:22 crc kubenswrapper[4813]: I1125 10:47:22.235260 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkrtd\" (UniqueName: \"kubernetes.io/projected/f912baf1-3c8b-4680-bd2b-8f4074eff6d1-kube-api-access-pkrtd\") pod \"openstack-operator-index-x6h2j\" (UID: \"f912baf1-3c8b-4680-bd2b-8f4074eff6d1\") " pod="openstack-operators/openstack-operator-index-x6h2j" Nov 25 10:47:22 crc kubenswrapper[4813]: I1125 10:47:22.341040 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-x6h2j" Nov 25 10:47:22 crc kubenswrapper[4813]: I1125 10:47:22.569830 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-x6h2j"] Nov 25 10:47:22 crc kubenswrapper[4813]: W1125 10:47:22.579724 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf912baf1_3c8b_4680_bd2b_8f4074eff6d1.slice/crio-f7816b131e2e1452abbbce11311988c6fb236177c3c50516c46c0e9bfff1f91d WatchSource:0}: Error finding container f7816b131e2e1452abbbce11311988c6fb236177c3c50516c46c0e9bfff1f91d: Status 404 returned error can't find the container with id f7816b131e2e1452abbbce11311988c6fb236177c3c50516c46c0e9bfff1f91d Nov 25 10:47:23 crc kubenswrapper[4813]: I1125 10:47:23.572953 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-x6h2j" event={"ID":"f912baf1-3c8b-4680-bd2b-8f4074eff6d1","Type":"ContainerStarted","Data":"f7816b131e2e1452abbbce11311988c6fb236177c3c50516c46c0e9bfff1f91d"} Nov 25 10:47:25 crc kubenswrapper[4813]: I1125 10:47:25.218173 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-6998585d5-4mmlv" Nov 25 10:47:25 crc kubenswrapper[4813]: I1125 10:47:25.237723 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-z9zl6" Nov 25 10:47:25 crc kubenswrapper[4813]: I1125 10:47:25.280068 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-z9zl6" Nov 25 10:47:26 crc kubenswrapper[4813]: I1125 10:47:26.017388 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-x6h2j"] Nov 25 10:47:26 crc kubenswrapper[4813]: I1125 10:47:26.421585 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-nkcj2"] Nov 25 10:47:26 crc kubenswrapper[4813]: I1125 10:47:26.422622 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-nkcj2" Nov 25 10:47:26 crc kubenswrapper[4813]: I1125 10:47:26.445971 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-nkcj2"] Nov 25 10:47:26 crc kubenswrapper[4813]: I1125 10:47:26.594593 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lj4ws\" (UniqueName: \"kubernetes.io/projected/da5dd33d-c08b-45ba-af6c-86748ecaf7b0-kube-api-access-lj4ws\") pod \"openstack-operator-index-nkcj2\" (UID: \"da5dd33d-c08b-45ba-af6c-86748ecaf7b0\") " pod="openstack-operators/openstack-operator-index-nkcj2" Nov 25 10:47:26 crc kubenswrapper[4813]: I1125 10:47:26.696008 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lj4ws\" (UniqueName: \"kubernetes.io/projected/da5dd33d-c08b-45ba-af6c-86748ecaf7b0-kube-api-access-lj4ws\") pod \"openstack-operator-index-nkcj2\" (UID: \"da5dd33d-c08b-45ba-af6c-86748ecaf7b0\") " pod="openstack-operators/openstack-operator-index-nkcj2" Nov 25 10:47:26 crc kubenswrapper[4813]: I1125 10:47:26.713755 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lj4ws\" (UniqueName: \"kubernetes.io/projected/da5dd33d-c08b-45ba-af6c-86748ecaf7b0-kube-api-access-lj4ws\") pod \"openstack-operator-index-nkcj2\" (UID: \"da5dd33d-c08b-45ba-af6c-86748ecaf7b0\") " pod="openstack-operators/openstack-operator-index-nkcj2" Nov 25 10:47:26 crc kubenswrapper[4813]: I1125 10:47:26.769195 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-nkcj2" Nov 25 10:47:27 crc kubenswrapper[4813]: I1125 10:47:27.189019 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-nkcj2"] Nov 25 10:47:27 crc kubenswrapper[4813]: W1125 10:47:27.202176 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda5dd33d_c08b_45ba_af6c_86748ecaf7b0.slice/crio-1fa937044b89b757e40aca38982bd13cc2a762477c26f20feb8925f36b0069b4 WatchSource:0}: Error finding container 1fa937044b89b757e40aca38982bd13cc2a762477c26f20feb8925f36b0069b4: Status 404 returned error can't find the container with id 1fa937044b89b757e40aca38982bd13cc2a762477c26f20feb8925f36b0069b4 Nov 25 10:47:27 crc kubenswrapper[4813]: I1125 10:47:27.598267 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-nkcj2" event={"ID":"da5dd33d-c08b-45ba-af6c-86748ecaf7b0","Type":"ContainerStarted","Data":"1fa937044b89b757e40aca38982bd13cc2a762477c26f20feb8925f36b0069b4"} Nov 25 10:47:33 crc kubenswrapper[4813]: I1125 10:47:33.641409 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7wz57"] Nov 25 10:47:33 crc kubenswrapper[4813]: I1125 10:47:33.644560 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7wz57" Nov 25 10:47:33 crc kubenswrapper[4813]: I1125 10:47:33.644947 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7wz57"] Nov 25 10:47:33 crc kubenswrapper[4813]: I1125 10:47:33.827488 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbsb4\" (UniqueName: \"kubernetes.io/projected/1fe425e3-1b9f-4502-a5b3-d0dcacfb6e23-kube-api-access-jbsb4\") pod \"certified-operators-7wz57\" (UID: \"1fe425e3-1b9f-4502-a5b3-d0dcacfb6e23\") " pod="openshift-marketplace/certified-operators-7wz57" Nov 25 10:47:33 crc kubenswrapper[4813]: I1125 10:47:33.827620 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fe425e3-1b9f-4502-a5b3-d0dcacfb6e23-utilities\") pod \"certified-operators-7wz57\" (UID: \"1fe425e3-1b9f-4502-a5b3-d0dcacfb6e23\") " pod="openshift-marketplace/certified-operators-7wz57" Nov 25 10:47:33 crc kubenswrapper[4813]: I1125 10:47:33.827654 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fe425e3-1b9f-4502-a5b3-d0dcacfb6e23-catalog-content\") pod \"certified-operators-7wz57\" (UID: \"1fe425e3-1b9f-4502-a5b3-d0dcacfb6e23\") " pod="openshift-marketplace/certified-operators-7wz57" Nov 25 10:47:33 crc kubenswrapper[4813]: I1125 10:47:33.929179 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbsb4\" (UniqueName: \"kubernetes.io/projected/1fe425e3-1b9f-4502-a5b3-d0dcacfb6e23-kube-api-access-jbsb4\") pod \"certified-operators-7wz57\" (UID: \"1fe425e3-1b9f-4502-a5b3-d0dcacfb6e23\") " pod="openshift-marketplace/certified-operators-7wz57" Nov 25 10:47:33 crc kubenswrapper[4813]: I1125 10:47:33.929243 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fe425e3-1b9f-4502-a5b3-d0dcacfb6e23-utilities\") pod \"certified-operators-7wz57\" (UID: \"1fe425e3-1b9f-4502-a5b3-d0dcacfb6e23\") " pod="openshift-marketplace/certified-operators-7wz57" Nov 25 10:47:33 crc kubenswrapper[4813]: I1125 10:47:33.929259 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fe425e3-1b9f-4502-a5b3-d0dcacfb6e23-catalog-content\") pod \"certified-operators-7wz57\" (UID: \"1fe425e3-1b9f-4502-a5b3-d0dcacfb6e23\") " pod="openshift-marketplace/certified-operators-7wz57" Nov 25 10:47:33 crc kubenswrapper[4813]: I1125 10:47:33.929841 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fe425e3-1b9f-4502-a5b3-d0dcacfb6e23-catalog-content\") pod \"certified-operators-7wz57\" (UID: \"1fe425e3-1b9f-4502-a5b3-d0dcacfb6e23\") " pod="openshift-marketplace/certified-operators-7wz57" Nov 25 10:47:33 crc kubenswrapper[4813]: I1125 10:47:33.930098 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fe425e3-1b9f-4502-a5b3-d0dcacfb6e23-utilities\") pod \"certified-operators-7wz57\" (UID: \"1fe425e3-1b9f-4502-a5b3-d0dcacfb6e23\") " pod="openshift-marketplace/certified-operators-7wz57" Nov 25 10:47:33 crc kubenswrapper[4813]: I1125 10:47:33.950818 4813 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-jbsb4\" (UniqueName: \"kubernetes.io/projected/1fe425e3-1b9f-4502-a5b3-d0dcacfb6e23-kube-api-access-jbsb4\") pod \"certified-operators-7wz57\" (UID: \"1fe425e3-1b9f-4502-a5b3-d0dcacfb6e23\") " pod="openshift-marketplace/certified-operators-7wz57" Nov 25 10:47:33 crc kubenswrapper[4813]: I1125 10:47:33.977594 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7wz57" Nov 25 10:47:34 crc kubenswrapper[4813]: I1125 10:47:34.182763 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7wz57"] Nov 25 10:47:34 crc kubenswrapper[4813]: W1125 10:47:34.187542 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1fe425e3_1b9f_4502_a5b3_d0dcacfb6e23.slice/crio-7b3cb9d85c3613308a94b5f54e234ba6d553f4b36a44827225b44438c0f66828 WatchSource:0}: Error finding container 7b3cb9d85c3613308a94b5f54e234ba6d553f4b36a44827225b44438c0f66828: Status 404 returned error can't find the container with id 7b3cb9d85c3613308a94b5f54e234ba6d553f4b36a44827225b44438c0f66828 Nov 25 10:47:34 crc kubenswrapper[4813]: I1125 10:47:34.655651 4813 generic.go:334] "Generic (PLEG): container finished" podID="1fe425e3-1b9f-4502-a5b3-d0dcacfb6e23" containerID="8781aaec7fe29ba9ac041ad95434f3c53d1617e56f72b4c68a957da330ab14f9" exitCode=0 Nov 25 10:47:34 crc kubenswrapper[4813]: I1125 10:47:34.655743 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7wz57" event={"ID":"1fe425e3-1b9f-4502-a5b3-d0dcacfb6e23","Type":"ContainerDied","Data":"8781aaec7fe29ba9ac041ad95434f3c53d1617e56f72b4c68a957da330ab14f9"} Nov 25 10:47:34 crc kubenswrapper[4813]: I1125 10:47:34.655935 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7wz57" event={"ID":"1fe425e3-1b9f-4502-a5b3-d0dcacfb6e23","Type":"ContainerStarted","Data":"7b3cb9d85c3613308a94b5f54e234ba6d553f4b36a44827225b44438c0f66828"} Nov 25 10:47:35 crc kubenswrapper[4813]: I1125 10:47:35.239608 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-z9zl6" Nov 25 10:47:38 crc kubenswrapper[4813]: I1125 10:47:38.687424 4813 generic.go:334] "Generic (PLEG): container finished" podID="1fe425e3-1b9f-4502-a5b3-d0dcacfb6e23" containerID="48d9f5816e3cb1c1799fe3ca8a0616af15484003598482d03292f592da5f534f" exitCode=0 Nov 25 10:47:38 crc kubenswrapper[4813]: I1125 10:47:38.687494 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7wz57" event={"ID":"1fe425e3-1b9f-4502-a5b3-d0dcacfb6e23","Type":"ContainerDied","Data":"48d9f5816e3cb1c1799fe3ca8a0616af15484003598482d03292f592da5f534f"} Nov 25 10:47:38 crc kubenswrapper[4813]: I1125 10:47:38.690942 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-x6h2j" event={"ID":"f912baf1-3c8b-4680-bd2b-8f4074eff6d1","Type":"ContainerStarted","Data":"930b4f08ebf0c3473a19969ab5cb5f0027bb8eb1b328773e3514c2a45e5531c6"} Nov 25 10:47:38 crc kubenswrapper[4813]: I1125 10:47:38.691030 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-x6h2j" podUID="f912baf1-3c8b-4680-bd2b-8f4074eff6d1" containerName="registry-server" containerID="cri-o://930b4f08ebf0c3473a19969ab5cb5f0027bb8eb1b328773e3514c2a45e5531c6" 
gracePeriod=2 Nov 25 10:47:38 crc kubenswrapper[4813]: I1125 10:47:38.694699 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-nkcj2" event={"ID":"da5dd33d-c08b-45ba-af6c-86748ecaf7b0","Type":"ContainerStarted","Data":"0c5cb5c44997692b6083eca3d0e18e3eb770c50b40c24a46d7625c919ea79a9e"} Nov 25 10:47:38 crc kubenswrapper[4813]: I1125 10:47:38.746512 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-x6h2j" podStartSLOduration=1.4536916930000001 podStartE2EDuration="16.746476674s" podCreationTimestamp="2025-11-25 10:47:22 +0000 UTC" firstStartedPulling="2025-11-25 10:47:22.581928617 +0000 UTC m=+939.711638503" lastFinishedPulling="2025-11-25 10:47:37.874713598 +0000 UTC m=+955.004423484" observedRunningTime="2025-11-25 10:47:38.739821845 +0000 UTC m=+955.869531751" watchObservedRunningTime="2025-11-25 10:47:38.746476674 +0000 UTC m=+955.876186560" Nov 25 10:47:38 crc kubenswrapper[4813]: I1125 10:47:38.755581 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-nkcj2" podStartSLOduration=2.012822631 podStartE2EDuration="12.755562112s" podCreationTimestamp="2025-11-25 10:47:26 +0000 UTC" firstStartedPulling="2025-11-25 10:47:27.204594081 +0000 UTC m=+944.334303977" lastFinishedPulling="2025-11-25 10:47:37.947333572 +0000 UTC m=+955.077043458" observedRunningTime="2025-11-25 10:47:38.752965009 +0000 UTC m=+955.882674905" watchObservedRunningTime="2025-11-25 10:47:38.755562112 +0000 UTC m=+955.885271998" Nov 25 10:47:39 crc kubenswrapper[4813]: I1125 10:47:39.246080 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-x6h2j" Nov 25 10:47:39 crc kubenswrapper[4813]: I1125 10:47:39.413187 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkrtd\" (UniqueName: \"kubernetes.io/projected/f912baf1-3c8b-4680-bd2b-8f4074eff6d1-kube-api-access-pkrtd\") pod \"f912baf1-3c8b-4680-bd2b-8f4074eff6d1\" (UID: \"f912baf1-3c8b-4680-bd2b-8f4074eff6d1\") " Nov 25 10:47:39 crc kubenswrapper[4813]: I1125 10:47:39.425405 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f912baf1-3c8b-4680-bd2b-8f4074eff6d1-kube-api-access-pkrtd" (OuterVolumeSpecName: "kube-api-access-pkrtd") pod "f912baf1-3c8b-4680-bd2b-8f4074eff6d1" (UID: "f912baf1-3c8b-4680-bd2b-8f4074eff6d1"). InnerVolumeSpecName "kube-api-access-pkrtd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:47:39 crc kubenswrapper[4813]: I1125 10:47:39.516517 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pkrtd\" (UniqueName: \"kubernetes.io/projected/f912baf1-3c8b-4680-bd2b-8f4074eff6d1-kube-api-access-pkrtd\") on node \"crc\" DevicePath \"\"" Nov 25 10:47:39 crc kubenswrapper[4813]: I1125 10:47:39.704655 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7wz57" event={"ID":"1fe425e3-1b9f-4502-a5b3-d0dcacfb6e23","Type":"ContainerStarted","Data":"c6ad9d19ab2ed5983a4dbfee82ccefb8a6931ff35cd8eb068b270c95cf1c67f7"} Nov 25 10:47:39 crc kubenswrapper[4813]: I1125 10:47:39.707655 4813 generic.go:334] "Generic (PLEG): container finished" podID="f912baf1-3c8b-4680-bd2b-8f4074eff6d1" containerID="930b4f08ebf0c3473a19969ab5cb5f0027bb8eb1b328773e3514c2a45e5531c6" exitCode=0 Nov 25 10:47:39 crc kubenswrapper[4813]: I1125 10:47:39.707741 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-x6h2j" event={"ID":"f912baf1-3c8b-4680-bd2b-8f4074eff6d1","Type":"ContainerDied","Data":"930b4f08ebf0c3473a19969ab5cb5f0027bb8eb1b328773e3514c2a45e5531c6"} Nov 25 10:47:39 crc kubenswrapper[4813]: I1125 10:47:39.707809 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-x6h2j" event={"ID":"f912baf1-3c8b-4680-bd2b-8f4074eff6d1","Type":"ContainerDied","Data":"f7816b131e2e1452abbbce11311988c6fb236177c3c50516c46c0e9bfff1f91d"} Nov 25 10:47:39 crc kubenswrapper[4813]: I1125 10:47:39.707833 4813 scope.go:117] "RemoveContainer" containerID="930b4f08ebf0c3473a19969ab5cb5f0027bb8eb1b328773e3514c2a45e5531c6" Nov 25 10:47:39 crc kubenswrapper[4813]: I1125 10:47:39.708081 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-x6h2j" Nov 25 10:47:39 crc kubenswrapper[4813]: I1125 10:47:39.726504 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7wz57" podStartSLOduration=2.377850646 podStartE2EDuration="6.726446315s" podCreationTimestamp="2025-11-25 10:47:33 +0000 UTC" firstStartedPulling="2025-11-25 10:47:34.933636246 +0000 UTC m=+952.063346132" lastFinishedPulling="2025-11-25 10:47:39.282231915 +0000 UTC m=+956.411941801" observedRunningTime="2025-11-25 10:47:39.725367864 +0000 UTC m=+956.855077770" watchObservedRunningTime="2025-11-25 10:47:39.726446315 +0000 UTC m=+956.856156211" Nov 25 10:47:39 crc kubenswrapper[4813]: I1125 10:47:39.727167 4813 scope.go:117] "RemoveContainer" containerID="930b4f08ebf0c3473a19969ab5cb5f0027bb8eb1b328773e3514c2a45e5531c6" Nov 25 10:47:39 crc kubenswrapper[4813]: E1125 10:47:39.727760 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"930b4f08ebf0c3473a19969ab5cb5f0027bb8eb1b328773e3514c2a45e5531c6\": container with ID starting with 930b4f08ebf0c3473a19969ab5cb5f0027bb8eb1b328773e3514c2a45e5531c6 not found: ID does not exist" containerID="930b4f08ebf0c3473a19969ab5cb5f0027bb8eb1b328773e3514c2a45e5531c6" Nov 25 10:47:39 crc kubenswrapper[4813]: I1125 10:47:39.727802 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"930b4f08ebf0c3473a19969ab5cb5f0027bb8eb1b328773e3514c2a45e5531c6"} err="failed to get container status \"930b4f08ebf0c3473a19969ab5cb5f0027bb8eb1b328773e3514c2a45e5531c6\": rpc error: code = NotFound desc = could not find container \"930b4f08ebf0c3473a19969ab5cb5f0027bb8eb1b328773e3514c2a45e5531c6\": container with ID starting with 930b4f08ebf0c3473a19969ab5cb5f0027bb8eb1b328773e3514c2a45e5531c6 not found: ID does not exist" Nov 25 10:47:39 crc kubenswrapper[4813]: I1125 10:47:39.741285 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-x6h2j"] Nov 25 10:47:39 crc kubenswrapper[4813]: I1125 10:47:39.745922 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-x6h2j"] Nov 25 10:47:41 crc kubenswrapper[4813]: I1125 10:47:41.632854 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f912baf1-3c8b-4680-bd2b-8f4074eff6d1" path="/var/lib/kubelet/pods/f912baf1-3c8b-4680-bd2b-8f4074eff6d1/volumes" Nov 25 10:47:43 crc kubenswrapper[4813]: I1125 10:47:43.978299 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7wz57" Nov 25 10:47:43 crc kubenswrapper[4813]: I1125 10:47:43.978368 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7wz57" Nov 25 10:47:44 crc kubenswrapper[4813]: I1125 10:47:44.033157 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7wz57" Nov 25 10:47:44 crc kubenswrapper[4813]: I1125 10:47:44.792447 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7wz57" Nov 25 10:47:46 crc kubenswrapper[4813]: I1125 10:47:46.416403 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7wz57"] Nov 25 10:47:46 crc kubenswrapper[4813]: I1125 10:47:46.754514 4813 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7wz57" podUID="1fe425e3-1b9f-4502-a5b3-d0dcacfb6e23" containerName="registry-server" containerID="cri-o://c6ad9d19ab2ed5983a4dbfee82ccefb8a6931ff35cd8eb068b270c95cf1c67f7" gracePeriod=2 Nov 25 10:47:46 crc kubenswrapper[4813]: I1125 10:47:46.769561 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-nkcj2" Nov 25 10:47:46 crc kubenswrapper[4813]: I1125 10:47:46.769618 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-nkcj2" Nov 25 10:47:46 crc kubenswrapper[4813]: I1125 10:47:46.795761 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-nkcj2" Nov 25 10:47:47 crc kubenswrapper[4813]: I1125 10:47:47.170746 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7wz57" Nov 25 10:47:47 crc kubenswrapper[4813]: I1125 10:47:47.361134 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fe425e3-1b9f-4502-a5b3-d0dcacfb6e23-utilities\") pod \"1fe425e3-1b9f-4502-a5b3-d0dcacfb6e23\" (UID: \"1fe425e3-1b9f-4502-a5b3-d0dcacfb6e23\") " Nov 25 10:47:47 crc kubenswrapper[4813]: I1125 10:47:47.361223 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbsb4\" (UniqueName: \"kubernetes.io/projected/1fe425e3-1b9f-4502-a5b3-d0dcacfb6e23-kube-api-access-jbsb4\") pod \"1fe425e3-1b9f-4502-a5b3-d0dcacfb6e23\" (UID: \"1fe425e3-1b9f-4502-a5b3-d0dcacfb6e23\") " Nov 25 10:47:47 crc kubenswrapper[4813]: I1125 10:47:47.361319 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fe425e3-1b9f-4502-a5b3-d0dcacfb6e23-catalog-content\") pod \"1fe425e3-1b9f-4502-a5b3-d0dcacfb6e23\" (UID: \"1fe425e3-1b9f-4502-a5b3-d0dcacfb6e23\") " Nov 25 10:47:47 crc kubenswrapper[4813]: I1125 10:47:47.362550 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1fe425e3-1b9f-4502-a5b3-d0dcacfb6e23-utilities" (OuterVolumeSpecName: "utilities") pod "1fe425e3-1b9f-4502-a5b3-d0dcacfb6e23" (UID: "1fe425e3-1b9f-4502-a5b3-d0dcacfb6e23"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 10:47:47 crc kubenswrapper[4813]: I1125 10:47:47.367856 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fe425e3-1b9f-4502-a5b3-d0dcacfb6e23-kube-api-access-jbsb4" (OuterVolumeSpecName: "kube-api-access-jbsb4") pod "1fe425e3-1b9f-4502-a5b3-d0dcacfb6e23" (UID: "1fe425e3-1b9f-4502-a5b3-d0dcacfb6e23"). InnerVolumeSpecName "kube-api-access-jbsb4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:47:47 crc kubenswrapper[4813]: I1125 10:47:47.414238 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1fe425e3-1b9f-4502-a5b3-d0dcacfb6e23-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1fe425e3-1b9f-4502-a5b3-d0dcacfb6e23" (UID: "1fe425e3-1b9f-4502-a5b3-d0dcacfb6e23"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 10:47:47 crc kubenswrapper[4813]: I1125 10:47:47.463130 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fe425e3-1b9f-4502-a5b3-d0dcacfb6e23-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 10:47:47 crc kubenswrapper[4813]: I1125 10:47:47.463170 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fe425e3-1b9f-4502-a5b3-d0dcacfb6e23-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 10:47:47 crc kubenswrapper[4813]: I1125 10:47:47.463182 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbsb4\" (UniqueName: \"kubernetes.io/projected/1fe425e3-1b9f-4502-a5b3-d0dcacfb6e23-kube-api-access-jbsb4\") on node \"crc\" DevicePath \"\"" Nov 25 10:47:47 crc kubenswrapper[4813]: I1125 10:47:47.762354 4813 generic.go:334] "Generic (PLEG): container finished" podID="1fe425e3-1b9f-4502-a5b3-d0dcacfb6e23" containerID="c6ad9d19ab2ed5983a4dbfee82ccefb8a6931ff35cd8eb068b270c95cf1c67f7" exitCode=0 Nov 25 10:47:47 crc kubenswrapper[4813]: I1125 10:47:47.762416 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7wz57" Nov 25 10:47:47 crc kubenswrapper[4813]: I1125 10:47:47.762431 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7wz57" event={"ID":"1fe425e3-1b9f-4502-a5b3-d0dcacfb6e23","Type":"ContainerDied","Data":"c6ad9d19ab2ed5983a4dbfee82ccefb8a6931ff35cd8eb068b270c95cf1c67f7"} Nov 25 10:47:47 crc kubenswrapper[4813]: I1125 10:47:47.762474 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7wz57" event={"ID":"1fe425e3-1b9f-4502-a5b3-d0dcacfb6e23","Type":"ContainerDied","Data":"7b3cb9d85c3613308a94b5f54e234ba6d553f4b36a44827225b44438c0f66828"} Nov 25 10:47:47 crc kubenswrapper[4813]: I1125 10:47:47.762516 4813 scope.go:117] "RemoveContainer" containerID="c6ad9d19ab2ed5983a4dbfee82ccefb8a6931ff35cd8eb068b270c95cf1c67f7" Nov 25 10:47:47 crc kubenswrapper[4813]: I1125 10:47:47.783481 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7wz57"] Nov 25 10:47:47 crc kubenswrapper[4813]: I1125 10:47:47.784651 4813 scope.go:117] "RemoveContainer" containerID="48d9f5816e3cb1c1799fe3ca8a0616af15484003598482d03292f592da5f534f" Nov 25 10:47:47 crc kubenswrapper[4813]: I1125 10:47:47.787046 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7wz57"] Nov 25 10:47:47 crc kubenswrapper[4813]: I1125 10:47:47.796994 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-nkcj2" Nov 25 10:47:47 crc kubenswrapper[4813]: I1125 10:47:47.803577 4813 scope.go:117] "RemoveContainer" containerID="8781aaec7fe29ba9ac041ad95434f3c53d1617e56f72b4c68a957da330ab14f9" Nov 25 10:47:47 crc kubenswrapper[4813]: I1125 10:47:47.825202 4813 scope.go:117] "RemoveContainer" containerID="c6ad9d19ab2ed5983a4dbfee82ccefb8a6931ff35cd8eb068b270c95cf1c67f7" Nov 25 10:47:47 crc kubenswrapper[4813]: E1125 10:47:47.826218 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6ad9d19ab2ed5983a4dbfee82ccefb8a6931ff35cd8eb068b270c95cf1c67f7\": container with ID starting with 
c6ad9d19ab2ed5983a4dbfee82ccefb8a6931ff35cd8eb068b270c95cf1c67f7 not found: ID does not exist" containerID="c6ad9d19ab2ed5983a4dbfee82ccefb8a6931ff35cd8eb068b270c95cf1c67f7" Nov 25 10:47:47 crc kubenswrapper[4813]: I1125 10:47:47.826275 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6ad9d19ab2ed5983a4dbfee82ccefb8a6931ff35cd8eb068b270c95cf1c67f7"} err="failed to get container status \"c6ad9d19ab2ed5983a4dbfee82ccefb8a6931ff35cd8eb068b270c95cf1c67f7\": rpc error: code = NotFound desc = could not find container \"c6ad9d19ab2ed5983a4dbfee82ccefb8a6931ff35cd8eb068b270c95cf1c67f7\": container with ID starting with c6ad9d19ab2ed5983a4dbfee82ccefb8a6931ff35cd8eb068b270c95cf1c67f7 not found: ID does not exist" Nov 25 10:47:47 crc kubenswrapper[4813]: I1125 10:47:47.826306 4813 scope.go:117] "RemoveContainer" containerID="48d9f5816e3cb1c1799fe3ca8a0616af15484003598482d03292f592da5f534f" Nov 25 10:47:47 crc kubenswrapper[4813]: E1125 10:47:47.827496 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48d9f5816e3cb1c1799fe3ca8a0616af15484003598482d03292f592da5f534f\": container with ID starting with 48d9f5816e3cb1c1799fe3ca8a0616af15484003598482d03292f592da5f534f not found: ID does not exist" containerID="48d9f5816e3cb1c1799fe3ca8a0616af15484003598482d03292f592da5f534f" Nov 25 10:47:47 crc kubenswrapper[4813]: I1125 10:47:47.827523 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48d9f5816e3cb1c1799fe3ca8a0616af15484003598482d03292f592da5f534f"} err="failed to get container status \"48d9f5816e3cb1c1799fe3ca8a0616af15484003598482d03292f592da5f534f\": rpc error: code = NotFound desc = could not find container \"48d9f5816e3cb1c1799fe3ca8a0616af15484003598482d03292f592da5f534f\": container with ID starting with 48d9f5816e3cb1c1799fe3ca8a0616af15484003598482d03292f592da5f534f not found: ID does not exist" Nov 25 10:47:47 crc kubenswrapper[4813]: I1125 10:47:47.827538 4813 scope.go:117] "RemoveContainer" containerID="8781aaec7fe29ba9ac041ad95434f3c53d1617e56f72b4c68a957da330ab14f9" Nov 25 10:47:47 crc kubenswrapper[4813]: E1125 10:47:47.828851 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8781aaec7fe29ba9ac041ad95434f3c53d1617e56f72b4c68a957da330ab14f9\": container with ID starting with 8781aaec7fe29ba9ac041ad95434f3c53d1617e56f72b4c68a957da330ab14f9 not found: ID does not exist" containerID="8781aaec7fe29ba9ac041ad95434f3c53d1617e56f72b4c68a957da330ab14f9" Nov 25 10:47:47 crc kubenswrapper[4813]: I1125 10:47:47.828878 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8781aaec7fe29ba9ac041ad95434f3c53d1617e56f72b4c68a957da330ab14f9"} err="failed to get container status \"8781aaec7fe29ba9ac041ad95434f3c53d1617e56f72b4c68a957da330ab14f9\": rpc error: code = NotFound desc = could not find container \"8781aaec7fe29ba9ac041ad95434f3c53d1617e56f72b4c68a957da330ab14f9\": container with ID starting with 8781aaec7fe29ba9ac041ad95434f3c53d1617e56f72b4c68a957da330ab14f9 not found: ID does not exist" Nov 25 10:47:48 crc kubenswrapper[4813]: I1125 10:47:48.224442 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9qf8w"] Nov 25 10:47:48 crc kubenswrapper[4813]: E1125 10:47:48.224795 4813 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="1fe425e3-1b9f-4502-a5b3-d0dcacfb6e23" containerName="registry-server" Nov 25 10:47:48 crc kubenswrapper[4813]: I1125 10:47:48.224832 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fe425e3-1b9f-4502-a5b3-d0dcacfb6e23" containerName="registry-server" Nov 25 10:47:48 crc kubenswrapper[4813]: E1125 10:47:48.224848 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fe425e3-1b9f-4502-a5b3-d0dcacfb6e23" containerName="extract-utilities" Nov 25 10:47:48 crc kubenswrapper[4813]: I1125 10:47:48.224918 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fe425e3-1b9f-4502-a5b3-d0dcacfb6e23" containerName="extract-utilities" Nov 25 10:47:48 crc kubenswrapper[4813]: E1125 10:47:48.224932 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fe425e3-1b9f-4502-a5b3-d0dcacfb6e23" containerName="extract-content" Nov 25 10:47:48 crc kubenswrapper[4813]: I1125 10:47:48.224938 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fe425e3-1b9f-4502-a5b3-d0dcacfb6e23" containerName="extract-content" Nov 25 10:47:48 crc kubenswrapper[4813]: E1125 10:47:48.224947 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f912baf1-3c8b-4680-bd2b-8f4074eff6d1" containerName="registry-server" Nov 25 10:47:48 crc kubenswrapper[4813]: I1125 10:47:48.224955 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f912baf1-3c8b-4680-bd2b-8f4074eff6d1" containerName="registry-server" Nov 25 10:47:48 crc kubenswrapper[4813]: I1125 10:47:48.225191 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fe425e3-1b9f-4502-a5b3-d0dcacfb6e23" containerName="registry-server" Nov 25 10:47:48 crc kubenswrapper[4813]: I1125 10:47:48.225276 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f912baf1-3c8b-4680-bd2b-8f4074eff6d1" containerName="registry-server" Nov 25 10:47:48 crc kubenswrapper[4813]: I1125 10:47:48.226574 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9qf8w" Nov 25 10:47:48 crc kubenswrapper[4813]: I1125 10:47:48.237270 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9qf8w"] Nov 25 10:47:48 crc kubenswrapper[4813]: I1125 10:47:48.274077 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1cf39c5-4fdc-44c7-914a-8be1dfc199eb-utilities\") pod \"community-operators-9qf8w\" (UID: \"c1cf39c5-4fdc-44c7-914a-8be1dfc199eb\") " pod="openshift-marketplace/community-operators-9qf8w" Nov 25 10:47:48 crc kubenswrapper[4813]: I1125 10:47:48.274147 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-md5n5\" (UniqueName: \"kubernetes.io/projected/c1cf39c5-4fdc-44c7-914a-8be1dfc199eb-kube-api-access-md5n5\") pod \"community-operators-9qf8w\" (UID: \"c1cf39c5-4fdc-44c7-914a-8be1dfc199eb\") " pod="openshift-marketplace/community-operators-9qf8w" Nov 25 10:47:48 crc kubenswrapper[4813]: I1125 10:47:48.274195 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1cf39c5-4fdc-44c7-914a-8be1dfc199eb-catalog-content\") pod \"community-operators-9qf8w\" (UID: \"c1cf39c5-4fdc-44c7-914a-8be1dfc199eb\") " pod="openshift-marketplace/community-operators-9qf8w" Nov 25 10:47:48 crc kubenswrapper[4813]: I1125 10:47:48.376250 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-md5n5\" (UniqueName: \"kubernetes.io/projected/c1cf39c5-4fdc-44c7-914a-8be1dfc199eb-kube-api-access-md5n5\") pod \"community-operators-9qf8w\" (UID: \"c1cf39c5-4fdc-44c7-914a-8be1dfc199eb\") " pod="openshift-marketplace/community-operators-9qf8w" Nov 25 10:47:48 crc kubenswrapper[4813]: I1125 10:47:48.376787 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1cf39c5-4fdc-44c7-914a-8be1dfc199eb-catalog-content\") pod \"community-operators-9qf8w\" (UID: \"c1cf39c5-4fdc-44c7-914a-8be1dfc199eb\") " pod="openshift-marketplace/community-operators-9qf8w" Nov 25 10:47:48 crc kubenswrapper[4813]: I1125 10:47:48.377274 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1cf39c5-4fdc-44c7-914a-8be1dfc199eb-catalog-content\") pod \"community-operators-9qf8w\" (UID: \"c1cf39c5-4fdc-44c7-914a-8be1dfc199eb\") " pod="openshift-marketplace/community-operators-9qf8w" Nov 25 10:47:48 crc kubenswrapper[4813]: I1125 10:47:48.377428 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1cf39c5-4fdc-44c7-914a-8be1dfc199eb-utilities\") pod \"community-operators-9qf8w\" (UID: \"c1cf39c5-4fdc-44c7-914a-8be1dfc199eb\") " pod="openshift-marketplace/community-operators-9qf8w" Nov 25 10:47:48 crc kubenswrapper[4813]: I1125 10:47:48.377747 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1cf39c5-4fdc-44c7-914a-8be1dfc199eb-utilities\") pod \"community-operators-9qf8w\" (UID: \"c1cf39c5-4fdc-44c7-914a-8be1dfc199eb\") " pod="openshift-marketplace/community-operators-9qf8w" Nov 25 10:47:48 crc kubenswrapper[4813]: I1125 10:47:48.396994 4813 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-md5n5\" (UniqueName: \"kubernetes.io/projected/c1cf39c5-4fdc-44c7-914a-8be1dfc199eb-kube-api-access-md5n5\") pod \"community-operators-9qf8w\" (UID: \"c1cf39c5-4fdc-44c7-914a-8be1dfc199eb\") " pod="openshift-marketplace/community-operators-9qf8w" Nov 25 10:47:48 crc kubenswrapper[4813]: I1125 10:47:48.550000 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9qf8w" Nov 25 10:47:48 crc kubenswrapper[4813]: I1125 10:47:48.992751 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9qf8w"] Nov 25 10:47:49 crc kubenswrapper[4813]: I1125 10:47:49.631892 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fe425e3-1b9f-4502-a5b3-d0dcacfb6e23" path="/var/lib/kubelet/pods/1fe425e3-1b9f-4502-a5b3-d0dcacfb6e23/volumes" Nov 25 10:47:49 crc kubenswrapper[4813]: I1125 10:47:49.779996 4813 generic.go:334] "Generic (PLEG): container finished" podID="c1cf39c5-4fdc-44c7-914a-8be1dfc199eb" containerID="28b1843c78632b9db81c2d223adc62e2673be1df0d87214d5b5a3a987cd3fe1a" exitCode=0 Nov 25 10:47:49 crc kubenswrapper[4813]: I1125 10:47:49.780043 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9qf8w" event={"ID":"c1cf39c5-4fdc-44c7-914a-8be1dfc199eb","Type":"ContainerDied","Data":"28b1843c78632b9db81c2d223adc62e2673be1df0d87214d5b5a3a987cd3fe1a"} Nov 25 10:47:49 crc kubenswrapper[4813]: I1125 10:47:49.780069 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9qf8w" event={"ID":"c1cf39c5-4fdc-44c7-914a-8be1dfc199eb","Type":"ContainerStarted","Data":"2d3bb9496b7a2060a7e5e6d160066291509ed463c03f9fbbde40555b06086cea"} Nov 25 10:47:52 crc kubenswrapper[4813]: I1125 10:47:52.800251 4813 generic.go:334] "Generic (PLEG): container finished" podID="c1cf39c5-4fdc-44c7-914a-8be1dfc199eb" containerID="97d3ebc3e3f3ebbedfe95de63533050e21e3e1d74feeb7d94fbfd7ef31be2be8" exitCode=0 Nov 25 10:47:52 crc kubenswrapper[4813]: I1125 10:47:52.800395 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9qf8w" event={"ID":"c1cf39c5-4fdc-44c7-914a-8be1dfc199eb","Type":"ContainerDied","Data":"97d3ebc3e3f3ebbedfe95de63533050e21e3e1d74feeb7d94fbfd7ef31be2be8"} Nov 25 10:47:54 crc kubenswrapper[4813]: I1125 10:47:54.815946 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9qf8w" event={"ID":"c1cf39c5-4fdc-44c7-914a-8be1dfc199eb","Type":"ContainerStarted","Data":"f99138ba96c75fed7b20e4328da2feceb06efc3c3ffa875b90c7197112b62bc2"} Nov 25 10:47:54 crc kubenswrapper[4813]: I1125 10:47:54.836319 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9qf8w" podStartSLOduration=2.902382841 podStartE2EDuration="6.836301592s" podCreationTimestamp="2025-11-25 10:47:48 +0000 UTC" firstStartedPulling="2025-11-25 10:47:49.782958338 +0000 UTC m=+966.912668224" lastFinishedPulling="2025-11-25 10:47:53.716877089 +0000 UTC m=+970.846586975" observedRunningTime="2025-11-25 10:47:54.832178035 +0000 UTC m=+971.961887931" watchObservedRunningTime="2025-11-25 10:47:54.836301592 +0000 UTC m=+971.966011478" Nov 25 10:47:55 crc kubenswrapper[4813]: I1125 10:47:55.704314 4813 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/a0db7ad644315a6604c792d09ae3ae2623e8b1b6f1f68951b50777854d7x5gz"] Nov 25 10:47:55 crc kubenswrapper[4813]: I1125 10:47:55.705811 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/a0db7ad644315a6604c792d09ae3ae2623e8b1b6f1f68951b50777854d7x5gz" Nov 25 10:47:55 crc kubenswrapper[4813]: I1125 10:47:55.707287 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-z2dqk" Nov 25 10:47:55 crc kubenswrapper[4813]: I1125 10:47:55.725251 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/a0db7ad644315a6604c792d09ae3ae2623e8b1b6f1f68951b50777854d7x5gz"] Nov 25 10:47:55 crc kubenswrapper[4813]: I1125 10:47:55.892048 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5722665a-1565-49c6-887f-4ed446b4efd4-util\") pod \"a0db7ad644315a6604c792d09ae3ae2623e8b1b6f1f68951b50777854d7x5gz\" (UID: \"5722665a-1565-49c6-887f-4ed446b4efd4\") " pod="openstack-operators/a0db7ad644315a6604c792d09ae3ae2623e8b1b6f1f68951b50777854d7x5gz" Nov 25 10:47:55 crc kubenswrapper[4813]: I1125 10:47:55.892397 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5722665a-1565-49c6-887f-4ed446b4efd4-bundle\") pod \"a0db7ad644315a6604c792d09ae3ae2623e8b1b6f1f68951b50777854d7x5gz\" (UID: \"5722665a-1565-49c6-887f-4ed446b4efd4\") " pod="openstack-operators/a0db7ad644315a6604c792d09ae3ae2623e8b1b6f1f68951b50777854d7x5gz" Nov 25 10:47:55 crc kubenswrapper[4813]: I1125 10:47:55.892447 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t72hz\" (UniqueName: \"kubernetes.io/projected/5722665a-1565-49c6-887f-4ed446b4efd4-kube-api-access-t72hz\") pod \"a0db7ad644315a6604c792d09ae3ae2623e8b1b6f1f68951b50777854d7x5gz\" (UID: \"5722665a-1565-49c6-887f-4ed446b4efd4\") " pod="openstack-operators/a0db7ad644315a6604c792d09ae3ae2623e8b1b6f1f68951b50777854d7x5gz" Nov 25 10:47:55 crc kubenswrapper[4813]: I1125 10:47:55.993600 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5722665a-1565-49c6-887f-4ed446b4efd4-util\") pod \"a0db7ad644315a6604c792d09ae3ae2623e8b1b6f1f68951b50777854d7x5gz\" (UID: \"5722665a-1565-49c6-887f-4ed446b4efd4\") " pod="openstack-operators/a0db7ad644315a6604c792d09ae3ae2623e8b1b6f1f68951b50777854d7x5gz" Nov 25 10:47:55 crc kubenswrapper[4813]: I1125 10:47:55.993765 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5722665a-1565-49c6-887f-4ed446b4efd4-bundle\") pod \"a0db7ad644315a6604c792d09ae3ae2623e8b1b6f1f68951b50777854d7x5gz\" (UID: \"5722665a-1565-49c6-887f-4ed446b4efd4\") " pod="openstack-operators/a0db7ad644315a6604c792d09ae3ae2623e8b1b6f1f68951b50777854d7x5gz" Nov 25 10:47:55 crc kubenswrapper[4813]: I1125 10:47:55.993837 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t72hz\" (UniqueName: \"kubernetes.io/projected/5722665a-1565-49c6-887f-4ed446b4efd4-kube-api-access-t72hz\") pod \"a0db7ad644315a6604c792d09ae3ae2623e8b1b6f1f68951b50777854d7x5gz\" (UID: \"5722665a-1565-49c6-887f-4ed446b4efd4\") " pod="openstack-operators/a0db7ad644315a6604c792d09ae3ae2623e8b1b6f1f68951b50777854d7x5gz" Nov 25 10:47:55 
crc kubenswrapper[4813]: I1125 10:47:55.994360 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5722665a-1565-49c6-887f-4ed446b4efd4-util\") pod \"a0db7ad644315a6604c792d09ae3ae2623e8b1b6f1f68951b50777854d7x5gz\" (UID: \"5722665a-1565-49c6-887f-4ed446b4efd4\") " pod="openstack-operators/a0db7ad644315a6604c792d09ae3ae2623e8b1b6f1f68951b50777854d7x5gz" Nov 25 10:47:55 crc kubenswrapper[4813]: I1125 10:47:55.994758 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5722665a-1565-49c6-887f-4ed446b4efd4-bundle\") pod \"a0db7ad644315a6604c792d09ae3ae2623e8b1b6f1f68951b50777854d7x5gz\" (UID: \"5722665a-1565-49c6-887f-4ed446b4efd4\") " pod="openstack-operators/a0db7ad644315a6604c792d09ae3ae2623e8b1b6f1f68951b50777854d7x5gz" Nov 25 10:47:56 crc kubenswrapper[4813]: I1125 10:47:56.021657 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t72hz\" (UniqueName: \"kubernetes.io/projected/5722665a-1565-49c6-887f-4ed446b4efd4-kube-api-access-t72hz\") pod \"a0db7ad644315a6604c792d09ae3ae2623e8b1b6f1f68951b50777854d7x5gz\" (UID: \"5722665a-1565-49c6-887f-4ed446b4efd4\") " pod="openstack-operators/a0db7ad644315a6604c792d09ae3ae2623e8b1b6f1f68951b50777854d7x5gz" Nov 25 10:47:56 crc kubenswrapper[4813]: I1125 10:47:56.030360 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/a0db7ad644315a6604c792d09ae3ae2623e8b1b6f1f68951b50777854d7x5gz" Nov 25 10:47:56 crc kubenswrapper[4813]: I1125 10:47:56.312620 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/a0db7ad644315a6604c792d09ae3ae2623e8b1b6f1f68951b50777854d7x5gz"] Nov 25 10:47:56 crc kubenswrapper[4813]: I1125 10:47:56.829003 4813 generic.go:334] "Generic (PLEG): container finished" podID="5722665a-1565-49c6-887f-4ed446b4efd4" containerID="399f80b29023409553932a98d89df8e285cce03b05f993c51fa9f527052ea569" exitCode=0 Nov 25 10:47:56 crc kubenswrapper[4813]: I1125 10:47:56.829055 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/a0db7ad644315a6604c792d09ae3ae2623e8b1b6f1f68951b50777854d7x5gz" event={"ID":"5722665a-1565-49c6-887f-4ed446b4efd4","Type":"ContainerDied","Data":"399f80b29023409553932a98d89df8e285cce03b05f993c51fa9f527052ea569"} Nov 25 10:47:56 crc kubenswrapper[4813]: I1125 10:47:56.829142 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/a0db7ad644315a6604c792d09ae3ae2623e8b1b6f1f68951b50777854d7x5gz" event={"ID":"5722665a-1565-49c6-887f-4ed446b4efd4","Type":"ContainerStarted","Data":"5a3420fbfbce1e1bf7a65b966bb7d14fec124898c85b3986ef453c2fa909d6b5"} Nov 25 10:47:58 crc kubenswrapper[4813]: I1125 10:47:58.551184 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9qf8w" Nov 25 10:47:58 crc kubenswrapper[4813]: I1125 10:47:58.551812 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9qf8w" Nov 25 10:47:58 crc kubenswrapper[4813]: I1125 10:47:58.593581 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9qf8w" Nov 25 10:47:58 crc kubenswrapper[4813]: I1125 10:47:58.887536 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9qf8w" Nov 25 10:47:59 crc 
kubenswrapper[4813]: I1125 10:47:59.850004 4813 generic.go:334] "Generic (PLEG): container finished" podID="5722665a-1565-49c6-887f-4ed446b4efd4" containerID="27e9bd588fb4af4d318dfa740412f079c95d2d54de63f5588e25eb9e7adbf73d" exitCode=0 Nov 25 10:47:59 crc kubenswrapper[4813]: I1125 10:47:59.850113 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/a0db7ad644315a6604c792d09ae3ae2623e8b1b6f1f68951b50777854d7x5gz" event={"ID":"5722665a-1565-49c6-887f-4ed446b4efd4","Type":"ContainerDied","Data":"27e9bd588fb4af4d318dfa740412f079c95d2d54de63f5588e25eb9e7adbf73d"} Nov 25 10:48:00 crc kubenswrapper[4813]: I1125 10:48:00.858087 4813 generic.go:334] "Generic (PLEG): container finished" podID="5722665a-1565-49c6-887f-4ed446b4efd4" containerID="0aeb1bc61cede4a0fed59c55bc90092085c07d28b12160b4d6b3de522ceaf153" exitCode=0 Nov 25 10:48:00 crc kubenswrapper[4813]: I1125 10:48:00.858135 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/a0db7ad644315a6604c792d09ae3ae2623e8b1b6f1f68951b50777854d7x5gz" event={"ID":"5722665a-1565-49c6-887f-4ed446b4efd4","Type":"ContainerDied","Data":"0aeb1bc61cede4a0fed59c55bc90092085c07d28b12160b4d6b3de522ceaf153"} Nov 25 10:48:01 crc kubenswrapper[4813]: I1125 10:48:01.055908 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9qf8w"] Nov 25 10:48:01 crc kubenswrapper[4813]: I1125 10:48:01.056158 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9qf8w" podUID="c1cf39c5-4fdc-44c7-914a-8be1dfc199eb" containerName="registry-server" containerID="cri-o://f99138ba96c75fed7b20e4328da2feceb06efc3c3ffa875b90c7197112b62bc2" gracePeriod=2 Nov 25 10:48:01 crc kubenswrapper[4813]: I1125 10:48:01.869470 4813 generic.go:334] "Generic (PLEG): container finished" podID="c1cf39c5-4fdc-44c7-914a-8be1dfc199eb" containerID="f99138ba96c75fed7b20e4328da2feceb06efc3c3ffa875b90c7197112b62bc2" exitCode=0 Nov 25 10:48:01 crc kubenswrapper[4813]: I1125 10:48:01.869517 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9qf8w" event={"ID":"c1cf39c5-4fdc-44c7-914a-8be1dfc199eb","Type":"ContainerDied","Data":"f99138ba96c75fed7b20e4328da2feceb06efc3c3ffa875b90c7197112b62bc2"} Nov 25 10:48:02 crc kubenswrapper[4813]: I1125 10:48:02.000875 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9qf8w" Nov 25 10:48:02 crc kubenswrapper[4813]: I1125 10:48:02.106855 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/a0db7ad644315a6604c792d09ae3ae2623e8b1b6f1f68951b50777854d7x5gz" Nov 25 10:48:02 crc kubenswrapper[4813]: I1125 10:48:02.179334 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1cf39c5-4fdc-44c7-914a-8be1dfc199eb-catalog-content\") pod \"c1cf39c5-4fdc-44c7-914a-8be1dfc199eb\" (UID: \"c1cf39c5-4fdc-44c7-914a-8be1dfc199eb\") " Nov 25 10:48:02 crc kubenswrapper[4813]: I1125 10:48:02.179442 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1cf39c5-4fdc-44c7-914a-8be1dfc199eb-utilities\") pod \"c1cf39c5-4fdc-44c7-914a-8be1dfc199eb\" (UID: \"c1cf39c5-4fdc-44c7-914a-8be1dfc199eb\") " Nov 25 10:48:02 crc kubenswrapper[4813]: I1125 10:48:02.179492 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-md5n5\" (UniqueName: \"kubernetes.io/projected/c1cf39c5-4fdc-44c7-914a-8be1dfc199eb-kube-api-access-md5n5\") pod \"c1cf39c5-4fdc-44c7-914a-8be1dfc199eb\" (UID: \"c1cf39c5-4fdc-44c7-914a-8be1dfc199eb\") " Nov 25 10:48:02 crc kubenswrapper[4813]: I1125 10:48:02.181180 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1cf39c5-4fdc-44c7-914a-8be1dfc199eb-utilities" (OuterVolumeSpecName: "utilities") pod "c1cf39c5-4fdc-44c7-914a-8be1dfc199eb" (UID: "c1cf39c5-4fdc-44c7-914a-8be1dfc199eb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 10:48:02 crc kubenswrapper[4813]: I1125 10:48:02.185795 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1cf39c5-4fdc-44c7-914a-8be1dfc199eb-kube-api-access-md5n5" (OuterVolumeSpecName: "kube-api-access-md5n5") pod "c1cf39c5-4fdc-44c7-914a-8be1dfc199eb" (UID: "c1cf39c5-4fdc-44c7-914a-8be1dfc199eb"). InnerVolumeSpecName "kube-api-access-md5n5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:48:02 crc kubenswrapper[4813]: I1125 10:48:02.246345 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1cf39c5-4fdc-44c7-914a-8be1dfc199eb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c1cf39c5-4fdc-44c7-914a-8be1dfc199eb" (UID: "c1cf39c5-4fdc-44c7-914a-8be1dfc199eb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 10:48:02 crc kubenswrapper[4813]: I1125 10:48:02.281119 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t72hz\" (UniqueName: \"kubernetes.io/projected/5722665a-1565-49c6-887f-4ed446b4efd4-kube-api-access-t72hz\") pod \"5722665a-1565-49c6-887f-4ed446b4efd4\" (UID: \"5722665a-1565-49c6-887f-4ed446b4efd4\") " Nov 25 10:48:02 crc kubenswrapper[4813]: I1125 10:48:02.281287 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5722665a-1565-49c6-887f-4ed446b4efd4-util\") pod \"5722665a-1565-49c6-887f-4ed446b4efd4\" (UID: \"5722665a-1565-49c6-887f-4ed446b4efd4\") " Nov 25 10:48:02 crc kubenswrapper[4813]: I1125 10:48:02.281333 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5722665a-1565-49c6-887f-4ed446b4efd4-bundle\") pod \"5722665a-1565-49c6-887f-4ed446b4efd4\" (UID: \"5722665a-1565-49c6-887f-4ed446b4efd4\") " Nov 25 10:48:02 crc kubenswrapper[4813]: I1125 10:48:02.281839 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-md5n5\" (UniqueName: \"kubernetes.io/projected/c1cf39c5-4fdc-44c7-914a-8be1dfc199eb-kube-api-access-md5n5\") on node \"crc\" DevicePath \"\"" Nov 25 10:48:02 crc kubenswrapper[4813]: I1125 10:48:02.281886 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1cf39c5-4fdc-44c7-914a-8be1dfc199eb-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 10:48:02 crc kubenswrapper[4813]: I1125 10:48:02.281910 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1cf39c5-4fdc-44c7-914a-8be1dfc199eb-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 10:48:02 crc kubenswrapper[4813]: I1125 10:48:02.282836 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5722665a-1565-49c6-887f-4ed446b4efd4-bundle" (OuterVolumeSpecName: "bundle") pod "5722665a-1565-49c6-887f-4ed446b4efd4" (UID: "5722665a-1565-49c6-887f-4ed446b4efd4"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 10:48:02 crc kubenswrapper[4813]: I1125 10:48:02.285026 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5722665a-1565-49c6-887f-4ed446b4efd4-kube-api-access-t72hz" (OuterVolumeSpecName: "kube-api-access-t72hz") pod "5722665a-1565-49c6-887f-4ed446b4efd4" (UID: "5722665a-1565-49c6-887f-4ed446b4efd4"). InnerVolumeSpecName "kube-api-access-t72hz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:48:02 crc kubenswrapper[4813]: I1125 10:48:02.382832 4813 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5722665a-1565-49c6-887f-4ed446b4efd4-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 10:48:02 crc kubenswrapper[4813]: I1125 10:48:02.382869 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t72hz\" (UniqueName: \"kubernetes.io/projected/5722665a-1565-49c6-887f-4ed446b4efd4-kube-api-access-t72hz\") on node \"crc\" DevicePath \"\"" Nov 25 10:48:02 crc kubenswrapper[4813]: I1125 10:48:02.847962 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5722665a-1565-49c6-887f-4ed446b4efd4-util" (OuterVolumeSpecName: "util") pod "5722665a-1565-49c6-887f-4ed446b4efd4" (UID: "5722665a-1565-49c6-887f-4ed446b4efd4"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 10:48:02 crc kubenswrapper[4813]: I1125 10:48:02.880661 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/a0db7ad644315a6604c792d09ae3ae2623e8b1b6f1f68951b50777854d7x5gz" Nov 25 10:48:02 crc kubenswrapper[4813]: I1125 10:48:02.880719 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/a0db7ad644315a6604c792d09ae3ae2623e8b1b6f1f68951b50777854d7x5gz" event={"ID":"5722665a-1565-49c6-887f-4ed446b4efd4","Type":"ContainerDied","Data":"5a3420fbfbce1e1bf7a65b966bb7d14fec124898c85b3986ef453c2fa909d6b5"} Nov 25 10:48:02 crc kubenswrapper[4813]: I1125 10:48:02.881252 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a3420fbfbce1e1bf7a65b966bb7d14fec124898c85b3986ef453c2fa909d6b5" Nov 25 10:48:02 crc kubenswrapper[4813]: I1125 10:48:02.883201 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9qf8w" event={"ID":"c1cf39c5-4fdc-44c7-914a-8be1dfc199eb","Type":"ContainerDied","Data":"2d3bb9496b7a2060a7e5e6d160066291509ed463c03f9fbbde40555b06086cea"} Nov 25 10:48:02 crc kubenswrapper[4813]: I1125 10:48:02.883264 4813 scope.go:117] "RemoveContainer" containerID="f99138ba96c75fed7b20e4328da2feceb06efc3c3ffa875b90c7197112b62bc2" Nov 25 10:48:02 crc kubenswrapper[4813]: I1125 10:48:02.883414 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9qf8w" Nov 25 10:48:02 crc kubenswrapper[4813]: I1125 10:48:02.890289 4813 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5722665a-1565-49c6-887f-4ed446b4efd4-util\") on node \"crc\" DevicePath \"\"" Nov 25 10:48:02 crc kubenswrapper[4813]: I1125 10:48:02.916310 4813 scope.go:117] "RemoveContainer" containerID="97d3ebc3e3f3ebbedfe95de63533050e21e3e1d74feeb7d94fbfd7ef31be2be8" Nov 25 10:48:02 crc kubenswrapper[4813]: I1125 10:48:02.924026 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9qf8w"] Nov 25 10:48:02 crc kubenswrapper[4813]: I1125 10:48:02.928927 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9qf8w"] Nov 25 10:48:02 crc kubenswrapper[4813]: I1125 10:48:02.955959 4813 scope.go:117] "RemoveContainer" containerID="28b1843c78632b9db81c2d223adc62e2673be1df0d87214d5b5a3a987cd3fe1a" Nov 25 10:48:03 crc kubenswrapper[4813]: I1125 10:48:03.630385 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1cf39c5-4fdc-44c7-914a-8be1dfc199eb" path="/var/lib/kubelet/pods/c1cf39c5-4fdc-44c7-914a-8be1dfc199eb/volumes" Nov 25 10:48:06 crc kubenswrapper[4813]: I1125 10:48:06.888621 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-577fbd7764-z9m8h"] Nov 25 10:48:06 crc kubenswrapper[4813]: E1125 10:48:06.890317 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5722665a-1565-49c6-887f-4ed446b4efd4" containerName="extract" Nov 25 10:48:06 crc kubenswrapper[4813]: I1125 10:48:06.890440 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="5722665a-1565-49c6-887f-4ed446b4efd4" containerName="extract" Nov 25 10:48:06 crc kubenswrapper[4813]: E1125 10:48:06.890524 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5722665a-1565-49c6-887f-4ed446b4efd4" containerName="util" Nov 25 10:48:06 crc kubenswrapper[4813]: I1125 10:48:06.890590 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="5722665a-1565-49c6-887f-4ed446b4efd4" containerName="util" Nov 25 10:48:06 crc kubenswrapper[4813]: E1125 10:48:06.890667 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1cf39c5-4fdc-44c7-914a-8be1dfc199eb" containerName="extract-content" Nov 25 10:48:06 crc kubenswrapper[4813]: I1125 10:48:06.890780 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1cf39c5-4fdc-44c7-914a-8be1dfc199eb" containerName="extract-content" Nov 25 10:48:06 crc kubenswrapper[4813]: E1125 10:48:06.890851 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1cf39c5-4fdc-44c7-914a-8be1dfc199eb" containerName="extract-utilities" Nov 25 10:48:06 crc kubenswrapper[4813]: I1125 10:48:06.890919 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1cf39c5-4fdc-44c7-914a-8be1dfc199eb" containerName="extract-utilities" Nov 25 10:48:06 crc kubenswrapper[4813]: E1125 10:48:06.891002 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1cf39c5-4fdc-44c7-914a-8be1dfc199eb" containerName="registry-server" Nov 25 10:48:06 crc kubenswrapper[4813]: I1125 10:48:06.891067 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1cf39c5-4fdc-44c7-914a-8be1dfc199eb" containerName="registry-server" Nov 25 10:48:06 crc kubenswrapper[4813]: E1125 10:48:06.891148 4813 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="5722665a-1565-49c6-887f-4ed446b4efd4" containerName="pull" Nov 25 10:48:06 crc kubenswrapper[4813]: I1125 10:48:06.891213 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="5722665a-1565-49c6-887f-4ed446b4efd4" containerName="pull" Nov 25 10:48:06 crc kubenswrapper[4813]: I1125 10:48:06.891407 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1cf39c5-4fdc-44c7-914a-8be1dfc199eb" containerName="registry-server" Nov 25 10:48:06 crc kubenswrapper[4813]: I1125 10:48:06.891485 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="5722665a-1565-49c6-887f-4ed446b4efd4" containerName="extract" Nov 25 10:48:06 crc kubenswrapper[4813]: I1125 10:48:06.892143 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-577fbd7764-z9m8h" Nov 25 10:48:06 crc kubenswrapper[4813]: I1125 10:48:06.897320 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-7htdr" Nov 25 10:48:06 crc kubenswrapper[4813]: I1125 10:48:06.922803 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-577fbd7764-z9m8h"] Nov 25 10:48:06 crc kubenswrapper[4813]: I1125 10:48:06.952074 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mv5hh\" (UniqueName: \"kubernetes.io/projected/32603f59-2392-4c3e-9d25-ba1fe7376687-kube-api-access-mv5hh\") pod \"openstack-operator-controller-operator-577fbd7764-z9m8h\" (UID: \"32603f59-2392-4c3e-9d25-ba1fe7376687\") " pod="openstack-operators/openstack-operator-controller-operator-577fbd7764-z9m8h" Nov 25 10:48:07 crc kubenswrapper[4813]: I1125 10:48:07.053459 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mv5hh\" (UniqueName: \"kubernetes.io/projected/32603f59-2392-4c3e-9d25-ba1fe7376687-kube-api-access-mv5hh\") pod \"openstack-operator-controller-operator-577fbd7764-z9m8h\" (UID: \"32603f59-2392-4c3e-9d25-ba1fe7376687\") " pod="openstack-operators/openstack-operator-controller-operator-577fbd7764-z9m8h" Nov 25 10:48:07 crc kubenswrapper[4813]: I1125 10:48:07.073839 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mv5hh\" (UniqueName: \"kubernetes.io/projected/32603f59-2392-4c3e-9d25-ba1fe7376687-kube-api-access-mv5hh\") pod \"openstack-operator-controller-operator-577fbd7764-z9m8h\" (UID: \"32603f59-2392-4c3e-9d25-ba1fe7376687\") " pod="openstack-operators/openstack-operator-controller-operator-577fbd7764-z9m8h" Nov 25 10:48:07 crc kubenswrapper[4813]: I1125 10:48:07.210219 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-577fbd7764-z9m8h" Nov 25 10:48:07 crc kubenswrapper[4813]: I1125 10:48:07.422932 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-577fbd7764-z9m8h"] Nov 25 10:48:07 crc kubenswrapper[4813]: I1125 10:48:07.924014 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-577fbd7764-z9m8h" event={"ID":"32603f59-2392-4c3e-9d25-ba1fe7376687","Type":"ContainerStarted","Data":"07e31a7ebf908fe16b01b8079adaeebcc6a95e798f3d27edb3be49d4b9423214"} Nov 25 10:48:16 crc kubenswrapper[4813]: I1125 10:48:16.990383 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-577fbd7764-z9m8h" event={"ID":"32603f59-2392-4c3e-9d25-ba1fe7376687","Type":"ContainerStarted","Data":"68449d85180117c6a9f528c03ef3e4490850e386f64be4f9984c846ecba8bb0e"} Nov 25 10:48:16 crc kubenswrapper[4813]: I1125 10:48:16.991008 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-577fbd7764-z9m8h" Nov 25 10:48:17 crc kubenswrapper[4813]: I1125 10:48:17.028603 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-577fbd7764-z9m8h" podStartSLOduration=2.325661041 podStartE2EDuration="11.028583812s" podCreationTimestamp="2025-11-25 10:48:06 +0000 UTC" firstStartedPulling="2025-11-25 10:48:07.429218765 +0000 UTC m=+984.558928651" lastFinishedPulling="2025-11-25 10:48:16.132141536 +0000 UTC m=+993.261851422" observedRunningTime="2025-11-25 10:48:17.018758923 +0000 UTC m=+994.148468829" watchObservedRunningTime="2025-11-25 10:48:17.028583812 +0000 UTC m=+994.158293698" Nov 25 10:48:27 crc kubenswrapper[4813]: I1125 10:48:27.213494 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-577fbd7764-z9m8h" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.015451 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-86dc4d89c8-4wff2"] Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.018057 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-4wff2" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.020505 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-jtc2b" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.024777 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-79856dc55c-dvfd9"] Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.026565 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-dvfd9" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.028762 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-r45mj" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.033708 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-86dc4d89c8-4wff2"] Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.037341 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-7d695c9b56-hjqzd"] Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.040997 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-hjqzd" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.043810 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-n8qg8" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.069610 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-7d695c9b56-hjqzd"] Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.081283 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bppht\" (UniqueName: \"kubernetes.io/projected/03c63a63-9a46-4bda-941b-8c5ba81a13fe-kube-api-access-bppht\") pod \"barbican-operator-controller-manager-86dc4d89c8-4wff2\" (UID: \"03c63a63-9a46-4bda-941b-8c5ba81a13fe\") " pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-4wff2" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.086415 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-79856dc55c-dvfd9"] Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.088902 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-547cf68667-6v6dd"] Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.089983 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-547cf68667-6v6dd" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.093205 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-6n2p7" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.107055 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-547cf68667-6v6dd"] Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.121735 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-774b86978c-f6dvp"] Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.122889 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-774b86978c-f6dvp" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.129816 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-pnvfx" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.135790 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c9694994-8spkk"] Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.136856 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-8spkk" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.142517 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-rf7b9" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.147010 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-774b86978c-f6dvp"] Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.149670 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c9694994-8spkk"] Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.182284 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkdps\" (UniqueName: \"kubernetes.io/projected/71c5bfc5-a289-4942-bc55-819f06787eb6-kube-api-access-mkdps\") pod \"glance-operator-controller-manager-547cf68667-6v6dd\" (UID: \"71c5bfc5-a289-4942-bc55-819f06787eb6\") " pod="openstack-operators/glance-operator-controller-manager-547cf68667-6v6dd" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.182328 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxdkn\" (UniqueName: \"kubernetes.io/projected/a650bdd3-2541-4b76-b5db-64273262bc06-kube-api-access-cxdkn\") pod \"cinder-operator-controller-manager-79856dc55c-dvfd9\" (UID: \"a650bdd3-2541-4b76-b5db-64273262bc06\") " pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-dvfd9" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.182393 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bppht\" (UniqueName: \"kubernetes.io/projected/03c63a63-9a46-4bda-941b-8c5ba81a13fe-kube-api-access-bppht\") pod \"barbican-operator-controller-manager-86dc4d89c8-4wff2\" (UID: \"03c63a63-9a46-4bda-941b-8c5ba81a13fe\") " pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-4wff2" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.182430 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njwq4\" (UniqueName: \"kubernetes.io/projected/aa2934d9-d547-49d0-9d06-232120b44fa1-kube-api-access-njwq4\") pod \"designate-operator-controller-manager-7d695c9b56-hjqzd\" (UID: \"aa2934d9-d547-49d0-9d06-232120b44fa1\") " pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-hjqzd" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.204667 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5bfcdc958c-blrjt"] Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.205883 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-blrjt" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.214283 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-mfq2v" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.218338 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-858778c9dc-fs9sm"] Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.219629 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-858778c9dc-fs9sm" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.227552 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.227634 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-nrhzm" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.236776 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5bfcdc958c-blrjt"] Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.237875 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bppht\" (UniqueName: \"kubernetes.io/projected/03c63a63-9a46-4bda-941b-8c5ba81a13fe-kube-api-access-bppht\") pod \"barbican-operator-controller-manager-86dc4d89c8-4wff2\" (UID: \"03c63a63-9a46-4bda-941b-8c5ba81a13fe\") " pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-4wff2" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.266408 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-858778c9dc-fs9sm"] Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.284505 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/06c81a1e-0461-4457-85ea-1a4060423eda-cert\") pod \"infra-operator-controller-manager-858778c9dc-fs9sm\" (UID: \"06c81a1e-0461-4457-85ea-1a4060423eda\") " pod="openstack-operators/infra-operator-controller-manager-858778c9dc-fs9sm" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.284593 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkdps\" (UniqueName: \"kubernetes.io/projected/71c5bfc5-a289-4942-bc55-819f06787eb6-kube-api-access-mkdps\") pod \"glance-operator-controller-manager-547cf68667-6v6dd\" (UID: \"71c5bfc5-a289-4942-bc55-819f06787eb6\") " pod="openstack-operators/glance-operator-controller-manager-547cf68667-6v6dd" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.284622 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdkks\" (UniqueName: \"kubernetes.io/projected/af18e07e-95b3-476f-9604-824c36ae74a5-kube-api-access-fdkks\") pod \"horizon-operator-controller-manager-68c9694994-8spkk\" (UID: \"af18e07e-95b3-476f-9604-824c36ae74a5\") " pod="openstack-operators/horizon-operator-controller-manager-68c9694994-8spkk" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.284668 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nlbz\" (UniqueName: 
\"kubernetes.io/projected/d4a62556-e6e8-42dc-b7e4-180c40611393-kube-api-access-6nlbz\") pod \"ironic-operator-controller-manager-5bfcdc958c-blrjt\" (UID: \"d4a62556-e6e8-42dc-b7e4-180c40611393\") " pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-blrjt" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.284707 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxdkn\" (UniqueName: \"kubernetes.io/projected/a650bdd3-2541-4b76-b5db-64273262bc06-kube-api-access-cxdkn\") pod \"cinder-operator-controller-manager-79856dc55c-dvfd9\" (UID: \"a650bdd3-2541-4b76-b5db-64273262bc06\") " pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-dvfd9" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.284740 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpjnb\" (UniqueName: \"kubernetes.io/projected/06c81a1e-0461-4457-85ea-1a4060423eda-kube-api-access-tpjnb\") pod \"infra-operator-controller-manager-858778c9dc-fs9sm\" (UID: \"06c81a1e-0461-4457-85ea-1a4060423eda\") " pod="openstack-operators/infra-operator-controller-manager-858778c9dc-fs9sm" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.284803 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vlhn\" (UniqueName: \"kubernetes.io/projected/eaf6f1c0-6585-4eba-8baf-942ed2503735-kube-api-access-6vlhn\") pod \"heat-operator-controller-manager-774b86978c-f6dvp\" (UID: \"eaf6f1c0-6585-4eba-8baf-942ed2503735\") " pod="openstack-operators/heat-operator-controller-manager-774b86978c-f6dvp" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.286032 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njwq4\" (UniqueName: \"kubernetes.io/projected/aa2934d9-d547-49d0-9d06-232120b44fa1-kube-api-access-njwq4\") pod \"designate-operator-controller-manager-7d695c9b56-hjqzd\" (UID: \"aa2934d9-d547-49d0-9d06-232120b44fa1\") " pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-hjqzd" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.298992 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-58bb8d67cc-jcjzx"] Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.302500 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-jcjzx" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.313207 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-rzl9k" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.332797 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxdkn\" (UniqueName: \"kubernetes.io/projected/a650bdd3-2541-4b76-b5db-64273262bc06-kube-api-access-cxdkn\") pod \"cinder-operator-controller-manager-79856dc55c-dvfd9\" (UID: \"a650bdd3-2541-4b76-b5db-64273262bc06\") " pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-dvfd9" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.334052 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-748dc6576f-76j46"] Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.335108 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-76j46" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.338237 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-j697j" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.339523 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkdps\" (UniqueName: \"kubernetes.io/projected/71c5bfc5-a289-4942-bc55-819f06787eb6-kube-api-access-mkdps\") pod \"glance-operator-controller-manager-547cf68667-6v6dd\" (UID: \"71c5bfc5-a289-4942-bc55-819f06787eb6\") " pod="openstack-operators/glance-operator-controller-manager-547cf68667-6v6dd" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.340477 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njwq4\" (UniqueName: \"kubernetes.io/projected/aa2934d9-d547-49d0-9d06-232120b44fa1-kube-api-access-njwq4\") pod \"designate-operator-controller-manager-7d695c9b56-hjqzd\" (UID: \"aa2934d9-d547-49d0-9d06-232120b44fa1\") " pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-hjqzd" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.342340 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-4wff2" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.344609 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-5ldjd"] Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.345864 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-5ldjd" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.348508 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-t7lft" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.355918 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-748dc6576f-76j46"] Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.360273 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-dvfd9" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.379892 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-hjqzd" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.392481 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-58bb8d67cc-jcjzx"] Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.393320 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsq8k\" (UniqueName: \"kubernetes.io/projected/efca9205-8a59-45ce-8c50-36b0d0389f12-kube-api-access-bsq8k\") pod \"manila-operator-controller-manager-58bb8d67cc-jcjzx\" (UID: \"efca9205-8a59-45ce-8c50-36b0d0389f12\") " pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-jcjzx" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.393396 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vlhn\" (UniqueName: \"kubernetes.io/projected/eaf6f1c0-6585-4eba-8baf-942ed2503735-kube-api-access-6vlhn\") pod \"heat-operator-controller-manager-774b86978c-f6dvp\" (UID: \"eaf6f1c0-6585-4eba-8baf-942ed2503735\") " pod="openstack-operators/heat-operator-controller-manager-774b86978c-f6dvp" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.393479 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/06c81a1e-0461-4457-85ea-1a4060423eda-cert\") pod \"infra-operator-controller-manager-858778c9dc-fs9sm\" (UID: \"06c81a1e-0461-4457-85ea-1a4060423eda\") " pod="openstack-operators/infra-operator-controller-manager-858778c9dc-fs9sm" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.393510 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdkks\" (UniqueName: \"kubernetes.io/projected/af18e07e-95b3-476f-9604-824c36ae74a5-kube-api-access-fdkks\") pod \"horizon-operator-controller-manager-68c9694994-8spkk\" (UID: \"af18e07e-95b3-476f-9604-824c36ae74a5\") " pod="openstack-operators/horizon-operator-controller-manager-68c9694994-8spkk" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.393544 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nlbz\" (UniqueName: \"kubernetes.io/projected/d4a62556-e6e8-42dc-b7e4-180c40611393-kube-api-access-6nlbz\") pod \"ironic-operator-controller-manager-5bfcdc958c-blrjt\" (UID: \"d4a62556-e6e8-42dc-b7e4-180c40611393\") " pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-blrjt" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.393587 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpjnb\" (UniqueName: \"kubernetes.io/projected/06c81a1e-0461-4457-85ea-1a4060423eda-kube-api-access-tpjnb\") pod \"infra-operator-controller-manager-858778c9dc-fs9sm\" (UID: \"06c81a1e-0461-4457-85ea-1a4060423eda\") " pod="openstack-operators/infra-operator-controller-manager-858778c9dc-fs9sm" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.410675 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/06c81a1e-0461-4457-85ea-1a4060423eda-cert\") pod \"infra-operator-controller-manager-858778c9dc-fs9sm\" (UID: \"06c81a1e-0461-4457-85ea-1a4060423eda\") " pod="openstack-operators/infra-operator-controller-manager-858778c9dc-fs9sm" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.414228 4813 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-547cf68667-6v6dd" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.415581 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-c6kw6"] Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.420781 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-c6kw6" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.424782 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vlhn\" (UniqueName: \"kubernetes.io/projected/eaf6f1c0-6585-4eba-8baf-942ed2503735-kube-api-access-6vlhn\") pod \"heat-operator-controller-manager-774b86978c-f6dvp\" (UID: \"eaf6f1c0-6585-4eba-8baf-942ed2503735\") " pod="openstack-operators/heat-operator-controller-manager-774b86978c-f6dvp" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.436624 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-5ldjd"] Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.448240 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-4gr2w" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.449264 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-79556f57fc-6j272"] Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.450293 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdkks\" (UniqueName: \"kubernetes.io/projected/af18e07e-95b3-476f-9604-824c36ae74a5-kube-api-access-fdkks\") pod \"horizon-operator-controller-manager-68c9694994-8spkk\" (UID: \"af18e07e-95b3-476f-9604-824c36ae74a5\") " pod="openstack-operators/horizon-operator-controller-manager-68c9694994-8spkk" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.450305 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-6j272" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.453038 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-vwbhq" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.454454 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nlbz\" (UniqueName: \"kubernetes.io/projected/d4a62556-e6e8-42dc-b7e4-180c40611393-kube-api-access-6nlbz\") pod \"ironic-operator-controller-manager-5bfcdc958c-blrjt\" (UID: \"d4a62556-e6e8-42dc-b7e4-180c40611393\") " pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-blrjt" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.454576 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpjnb\" (UniqueName: \"kubernetes.io/projected/06c81a1e-0461-4457-85ea-1a4060423eda-kube-api-access-tpjnb\") pod \"infra-operator-controller-manager-858778c9dc-fs9sm\" (UID: \"06c81a1e-0461-4457-85ea-1a4060423eda\") " pod="openstack-operators/infra-operator-controller-manager-858778c9dc-fs9sm" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.456078 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-774b86978c-f6dvp" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.470175 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-8spkk" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.489427 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-c6kw6"] Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.495866 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4z4s\" (UniqueName: \"kubernetes.io/projected/baf6f7bb-db50-4013-8b77-2b7e4c8101c2-kube-api-access-t4z4s\") pod \"mariadb-operator-controller-manager-cb6c4fdb7-5ldjd\" (UID: \"baf6f7bb-db50-4013-8b77-2b7e4c8101c2\") " pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-5ldjd" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.495954 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wk5v7\" (UniqueName: \"kubernetes.io/projected/b69526d6-6616-4536-a228-4cdb57e1881c-kube-api-access-wk5v7\") pod \"neutron-operator-controller-manager-7c57c8bbc4-c6kw6\" (UID: \"b69526d6-6616-4536-a228-4cdb57e1881c\") " pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-c6kw6" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.495990 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsq8k\" (UniqueName: \"kubernetes.io/projected/efca9205-8a59-45ce-8c50-36b0d0389f12-kube-api-access-bsq8k\") pod \"manila-operator-controller-manager-58bb8d67cc-jcjzx\" (UID: \"efca9205-8a59-45ce-8c50-36b0d0389f12\") " pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-jcjzx" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.496211 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnxkh\" (UniqueName: \"kubernetes.io/projected/7921584b-8ce0-45b8-8a56-ab0fdde43582-kube-api-access-nnxkh\") pod \"keystone-operator-controller-manager-748dc6576f-76j46\" (UID: \"7921584b-8ce0-45b8-8a56-ab0fdde43582\") " pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-76j46" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.500162 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-79556f57fc-6j272"] Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.523969 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsq8k\" (UniqueName: \"kubernetes.io/projected/efca9205-8a59-45ce-8c50-36b0d0389f12-kube-api-access-bsq8k\") pod \"manila-operator-controller-manager-58bb8d67cc-jcjzx\" (UID: \"efca9205-8a59-45ce-8c50-36b0d0389f12\") " pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-jcjzx" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.525150 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-fd75fd47d-gjs27"] Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.526621 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-gjs27" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.546167 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-fd75fd47d-gjs27"] Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.548083 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-66cf5c67ff-tc2mg"] Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.549427 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-tc2mg" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.552184 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-wqm5s" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.553914 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-qgwvs" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.556216 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-v2clw"] Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.557642 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-v2clw" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.560155 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.560448 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-v6s5g" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.567355 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-66cf5c67ff-tc2mg"] Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.571854 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-blrjt" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.575294 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5db546f9d9-2d2x7"] Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.576638 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-2d2x7" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.578372 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-gqgjk" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.581522 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-v2clw"] Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.584126 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-858778c9dc-fs9sm" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.597512 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5db546f9d9-2d2x7"] Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.597579 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzggh\" (UniqueName: \"kubernetes.io/projected/9374bbb0-b458-4c1c-a327-67bcbea83045-kube-api-access-fzggh\") pod \"nova-operator-controller-manager-79556f57fc-6j272\" (UID: \"9374bbb0-b458-4c1c-a327-67bcbea83045\") " pod="openstack-operators/nova-operator-controller-manager-79556f57fc-6j272" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.597624 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4z4s\" (UniqueName: \"kubernetes.io/projected/baf6f7bb-db50-4013-8b77-2b7e4c8101c2-kube-api-access-t4z4s\") pod \"mariadb-operator-controller-manager-cb6c4fdb7-5ldjd\" (UID: \"baf6f7bb-db50-4013-8b77-2b7e4c8101c2\") " pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-5ldjd" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.597673 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjvjs\" (UniqueName: \"kubernetes.io/projected/a31ffbb8-0255-45d6-9125-6cccc7b444ba-kube-api-access-vjvjs\") pod \"octavia-operator-controller-manager-fd75fd47d-gjs27\" (UID: \"a31ffbb8-0255-45d6-9125-6cccc7b444ba\") " pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-gjs27" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.597718 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wk5v7\" (UniqueName: \"kubernetes.io/projected/b69526d6-6616-4536-a228-4cdb57e1881c-kube-api-access-wk5v7\") pod \"neutron-operator-controller-manager-7c57c8bbc4-c6kw6\" (UID: \"b69526d6-6616-4536-a228-4cdb57e1881c\") " pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-c6kw6" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.597796 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnxkh\" (UniqueName: \"kubernetes.io/projected/7921584b-8ce0-45b8-8a56-ab0fdde43582-kube-api-access-nnxkh\") pod \"keystone-operator-controller-manager-748dc6576f-76j46\" (UID: \"7921584b-8ce0-45b8-8a56-ab0fdde43582\") " pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-76j46" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.602691 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-6fdc4fcf86-fjkzd"] Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.605764 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-fjkzd" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.617115 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-cm597" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.621097 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-6fdc4fcf86-fjkzd"] Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.631627 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4z4s\" (UniqueName: \"kubernetes.io/projected/baf6f7bb-db50-4013-8b77-2b7e4c8101c2-kube-api-access-t4z4s\") pod \"mariadb-operator-controller-manager-cb6c4fdb7-5ldjd\" (UID: \"baf6f7bb-db50-4013-8b77-2b7e4c8101c2\") " pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-5ldjd" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.650981 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnxkh\" (UniqueName: \"kubernetes.io/projected/7921584b-8ce0-45b8-8a56-ab0fdde43582-kube-api-access-nnxkh\") pod \"keystone-operator-controller-manager-748dc6576f-76j46\" (UID: \"7921584b-8ce0-45b8-8a56-ab0fdde43582\") " pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-76j46" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.653619 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wk5v7\" (UniqueName: \"kubernetes.io/projected/b69526d6-6616-4536-a228-4cdb57e1881c-kube-api-access-wk5v7\") pod \"neutron-operator-controller-manager-7c57c8bbc4-c6kw6\" (UID: \"b69526d6-6616-4536-a228-4cdb57e1881c\") " pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-c6kw6" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.657900 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-567f98c9d-qplf9"] Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.666244 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-qplf9" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.670087 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-8qsg4" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.695916 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-567f98c9d-qplf9"] Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.699196 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhffg\" (UniqueName: \"kubernetes.io/projected/db556642-a360-4559-8cde-7c25d7a893e0-kube-api-access-bhffg\") pod \"ovn-operator-controller-manager-66cf5c67ff-tc2mg\" (UID: \"db556642-a360-4559-8cde-7c25d7a893e0\") " pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-tc2mg" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.699256 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jltmt\" (UniqueName: \"kubernetes.io/projected/0a946ff2-f2e3-48c2-ae3b-774a4ea85492-kube-api-access-jltmt\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-v2clw\" (UID: \"0a946ff2-f2e3-48c2-ae3b-774a4ea85492\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-v2clw" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.699298 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzggh\" (UniqueName: \"kubernetes.io/projected/9374bbb0-b458-4c1c-a327-67bcbea83045-kube-api-access-fzggh\") pod \"nova-operator-controller-manager-79556f57fc-6j272\" (UID: \"9374bbb0-b458-4c1c-a327-67bcbea83045\") " pod="openstack-operators/nova-operator-controller-manager-79556f57fc-6j272" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.699335 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0a946ff2-f2e3-48c2-ae3b-774a4ea85492-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-v2clw\" (UID: \"0a946ff2-f2e3-48c2-ae3b-774a4ea85492\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-v2clw" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.699369 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2z8s\" (UniqueName: \"kubernetes.io/projected/94c3d2b4-f1bb-402d-a39d-78e16bee970b-kube-api-access-k2z8s\") pod \"swift-operator-controller-manager-6fdc4fcf86-fjkzd\" (UID: \"94c3d2b4-f1bb-402d-a39d-78e16bee970b\") " pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-fjkzd" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.699398 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngdf8\" (UniqueName: \"kubernetes.io/projected/9093a664-86f3-4349-bd13-0a5e4aca8036-kube-api-access-ngdf8\") pod \"placement-operator-controller-manager-5db546f9d9-2d2x7\" (UID: \"9093a664-86f3-4349-bd13-0a5e4aca8036\") " pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-2d2x7" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.699437 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjvjs\" (UniqueName: 
\"kubernetes.io/projected/a31ffbb8-0255-45d6-9125-6cccc7b444ba-kube-api-access-vjvjs\") pod \"octavia-operator-controller-manager-fd75fd47d-gjs27\" (UID: \"a31ffbb8-0255-45d6-9125-6cccc7b444ba\") " pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-gjs27" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.731642 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5cb74df96-cwrzw"] Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.731766 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzggh\" (UniqueName: \"kubernetes.io/projected/9374bbb0-b458-4c1c-a327-67bcbea83045-kube-api-access-fzggh\") pod \"nova-operator-controller-manager-79556f57fc-6j272\" (UID: \"9374bbb0-b458-4c1c-a327-67bcbea83045\") " pod="openstack-operators/nova-operator-controller-manager-79556f57fc-6j272" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.733719 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5cb74df96-cwrzw" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.738312 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-n68f4" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.746962 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-jcjzx" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.754521 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjvjs\" (UniqueName: \"kubernetes.io/projected/a31ffbb8-0255-45d6-9125-6cccc7b444ba-kube-api-access-vjvjs\") pod \"octavia-operator-controller-manager-fd75fd47d-gjs27\" (UID: \"a31ffbb8-0255-45d6-9125-6cccc7b444ba\") " pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-gjs27" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.762945 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-76j46" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.768902 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5cb74df96-cwrzw"] Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.808073 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-5ldjd" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.809069 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tjz4\" (UniqueName: \"kubernetes.io/projected/5f9254c7-c8dc-4504-bdf5-264c78e03b0c-kube-api-access-6tjz4\") pod \"telemetry-operator-controller-manager-567f98c9d-qplf9\" (UID: \"5f9254c7-c8dc-4504-bdf5-264c78e03b0c\") " pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-qplf9" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.809104 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhffg\" (UniqueName: \"kubernetes.io/projected/db556642-a360-4559-8cde-7c25d7a893e0-kube-api-access-bhffg\") pod \"ovn-operator-controller-manager-66cf5c67ff-tc2mg\" (UID: \"db556642-a360-4559-8cde-7c25d7a893e0\") " pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-tc2mg" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.809134 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jltmt\" (UniqueName: \"kubernetes.io/projected/0a946ff2-f2e3-48c2-ae3b-774a4ea85492-kube-api-access-jltmt\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-v2clw\" (UID: \"0a946ff2-f2e3-48c2-ae3b-774a4ea85492\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-v2clw" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.809187 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fmxz\" (UniqueName: \"kubernetes.io/projected/49b29226-49bf-4d59-9c7f-998d924bdace-kube-api-access-6fmxz\") pod \"test-operator-controller-manager-5cb74df96-cwrzw\" (UID: \"49b29226-49bf-4d59-9c7f-998d924bdace\") " pod="openstack-operators/test-operator-controller-manager-5cb74df96-cwrzw" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.809232 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0a946ff2-f2e3-48c2-ae3b-774a4ea85492-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-v2clw\" (UID: \"0a946ff2-f2e3-48c2-ae3b-774a4ea85492\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-v2clw" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.809261 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2z8s\" (UniqueName: \"kubernetes.io/projected/94c3d2b4-f1bb-402d-a39d-78e16bee970b-kube-api-access-k2z8s\") pod \"swift-operator-controller-manager-6fdc4fcf86-fjkzd\" (UID: \"94c3d2b4-f1bb-402d-a39d-78e16bee970b\") " pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-fjkzd" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.809288 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngdf8\" (UniqueName: \"kubernetes.io/projected/9093a664-86f3-4349-bd13-0a5e4aca8036-kube-api-access-ngdf8\") pod \"placement-operator-controller-manager-5db546f9d9-2d2x7\" (UID: \"9093a664-86f3-4349-bd13-0a5e4aca8036\") " pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-2d2x7" Nov 25 10:48:44 crc kubenswrapper[4813]: E1125 10:48:44.810000 4813 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret 
"openstack-baremetal-operator-webhook-server-cert" not found Nov 25 10:48:44 crc kubenswrapper[4813]: E1125 10:48:44.810037 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0a946ff2-f2e3-48c2-ae3b-774a4ea85492-cert podName:0a946ff2-f2e3-48c2-ae3b-774a4ea85492 nodeName:}" failed. No retries permitted until 2025-11-25 10:48:45.310023524 +0000 UTC m=+1022.439733410 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0a946ff2-f2e3-48c2-ae3b-774a4ea85492-cert") pod "openstack-baremetal-operator-controller-manager-544b9bb9-v2clw" (UID: "0a946ff2-f2e3-48c2-ae3b-774a4ea85492") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.825060 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-c6kw6" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.833108 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-6j272" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.850285 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngdf8\" (UniqueName: \"kubernetes.io/projected/9093a664-86f3-4349-bd13-0a5e4aca8036-kube-api-access-ngdf8\") pod \"placement-operator-controller-manager-5db546f9d9-2d2x7\" (UID: \"9093a664-86f3-4349-bd13-0a5e4aca8036\") " pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-2d2x7" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.850452 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhffg\" (UniqueName: \"kubernetes.io/projected/db556642-a360-4559-8cde-7c25d7a893e0-kube-api-access-bhffg\") pod \"ovn-operator-controller-manager-66cf5c67ff-tc2mg\" (UID: \"db556642-a360-4559-8cde-7c25d7a893e0\") " pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-tc2mg" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.850595 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2z8s\" (UniqueName: \"kubernetes.io/projected/94c3d2b4-f1bb-402d-a39d-78e16bee970b-kube-api-access-k2z8s\") pod \"swift-operator-controller-manager-6fdc4fcf86-fjkzd\" (UID: \"94c3d2b4-f1bb-402d-a39d-78e16bee970b\") " pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-fjkzd" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.864143 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-864885998-bpbjt"] Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.877081 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-gjs27" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.898385 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-864885998-bpbjt"] Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.898542 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-864885998-bpbjt" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.901030 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jltmt\" (UniqueName: \"kubernetes.io/projected/0a946ff2-f2e3-48c2-ae3b-774a4ea85492-kube-api-access-jltmt\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-v2clw\" (UID: \"0a946ff2-f2e3-48c2-ae3b-774a4ea85492\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-v2clw" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.902419 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-cctvq" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.910390 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tjz4\" (UniqueName: \"kubernetes.io/projected/5f9254c7-c8dc-4504-bdf5-264c78e03b0c-kube-api-access-6tjz4\") pod \"telemetry-operator-controller-manager-567f98c9d-qplf9\" (UID: \"5f9254c7-c8dc-4504-bdf5-264c78e03b0c\") " pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-qplf9" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.910450 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fmxz\" (UniqueName: \"kubernetes.io/projected/49b29226-49bf-4d59-9c7f-998d924bdace-kube-api-access-6fmxz\") pod \"test-operator-controller-manager-5cb74df96-cwrzw\" (UID: \"49b29226-49bf-4d59-9c7f-998d924bdace\") " pod="openstack-operators/test-operator-controller-manager-5cb74df96-cwrzw" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.936210 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tjz4\" (UniqueName: \"kubernetes.io/projected/5f9254c7-c8dc-4504-bdf5-264c78e03b0c-kube-api-access-6tjz4\") pod \"telemetry-operator-controller-manager-567f98c9d-qplf9\" (UID: \"5f9254c7-c8dc-4504-bdf5-264c78e03b0c\") " pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-qplf9" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.944053 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fmxz\" (UniqueName: \"kubernetes.io/projected/49b29226-49bf-4d59-9c7f-998d924bdace-kube-api-access-6fmxz\") pod \"test-operator-controller-manager-5cb74df96-cwrzw\" (UID: \"49b29226-49bf-4d59-9c7f-998d924bdace\") " pod="openstack-operators/test-operator-controller-manager-5cb74df96-cwrzw" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.957323 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5ffc8f797b-hbwwd"] Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.958451 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5ffc8f797b-hbwwd" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.963413 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.964242 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.965220 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-f46kq" Nov 25 10:48:44 crc kubenswrapper[4813]: I1125 10:48:44.989962 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-2d2x7" Nov 25 10:48:45 crc kubenswrapper[4813]: I1125 10:48:45.012155 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5m8t\" (UniqueName: \"kubernetes.io/projected/48ea1018-a88f-4ef0-a82f-7e3b012522ec-kube-api-access-w5m8t\") pod \"watcher-operator-controller-manager-864885998-bpbjt\" (UID: \"48ea1018-a88f-4ef0-a82f-7e3b012522ec\") " pod="openstack-operators/watcher-operator-controller-manager-864885998-bpbjt" Nov 25 10:48:45 crc kubenswrapper[4813]: I1125 10:48:45.014026 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5ffc8f797b-hbwwd"] Nov 25 10:48:45 crc kubenswrapper[4813]: I1125 10:48:45.028505 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-fjkzd" Nov 25 10:48:45 crc kubenswrapper[4813]: I1125 10:48:45.039800 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qd4tx"] Nov 25 10:48:45 crc kubenswrapper[4813]: I1125 10:48:45.045987 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qd4tx" Nov 25 10:48:45 crc kubenswrapper[4813]: I1125 10:48:45.050925 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-h8jr8" Nov 25 10:48:45 crc kubenswrapper[4813]: I1125 10:48:45.056307 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qd4tx"] Nov 25 10:48:45 crc kubenswrapper[4813]: I1125 10:48:45.059435 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-qplf9" Nov 25 10:48:45 crc kubenswrapper[4813]: I1125 10:48:45.115589 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-547cf68667-6v6dd"] Nov 25 10:48:45 crc kubenswrapper[4813]: I1125 10:48:45.118318 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67krk\" (UniqueName: \"kubernetes.io/projected/2bf03402-32ec-423d-a6af-657bc0cfeb15-kube-api-access-67krk\") pod \"rabbitmq-cluster-operator-manager-668c99d594-qd4tx\" (UID: \"2bf03402-32ec-423d-a6af-657bc0cfeb15\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qd4tx" Nov 25 10:48:45 crc kubenswrapper[4813]: I1125 10:48:45.118948 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvsf6\" (UniqueName: \"kubernetes.io/projected/09bd1800-0aaa-4908-ac58-e0890a2a309f-kube-api-access-fvsf6\") pod \"openstack-operator-controller-manager-5ffc8f797b-hbwwd\" (UID: \"09bd1800-0aaa-4908-ac58-e0890a2a309f\") " pod="openstack-operators/openstack-operator-controller-manager-5ffc8f797b-hbwwd" Nov 25 10:48:45 crc kubenswrapper[4813]: I1125 10:48:45.119091 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/09bd1800-0aaa-4908-ac58-e0890a2a309f-webhook-certs\") pod \"openstack-operator-controller-manager-5ffc8f797b-hbwwd\" (UID: \"09bd1800-0aaa-4908-ac58-e0890a2a309f\") " pod="openstack-operators/openstack-operator-controller-manager-5ffc8f797b-hbwwd" Nov 25 10:48:45 crc kubenswrapper[4813]: I1125 10:48:45.119212 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/09bd1800-0aaa-4908-ac58-e0890a2a309f-metrics-certs\") pod \"openstack-operator-controller-manager-5ffc8f797b-hbwwd\" (UID: \"09bd1800-0aaa-4908-ac58-e0890a2a309f\") " pod="openstack-operators/openstack-operator-controller-manager-5ffc8f797b-hbwwd" Nov 25 10:48:45 crc kubenswrapper[4813]: I1125 10:48:45.119347 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5m8t\" (UniqueName: \"kubernetes.io/projected/48ea1018-a88f-4ef0-a82f-7e3b012522ec-kube-api-access-w5m8t\") pod \"watcher-operator-controller-manager-864885998-bpbjt\" (UID: \"48ea1018-a88f-4ef0-a82f-7e3b012522ec\") " pod="openstack-operators/watcher-operator-controller-manager-864885998-bpbjt" Nov 25 10:48:45 crc kubenswrapper[4813]: I1125 10:48:45.130337 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-86dc4d89c8-4wff2"] Nov 25 10:48:45 crc kubenswrapper[4813]: I1125 10:48:45.164885 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5cb74df96-cwrzw" Nov 25 10:48:45 crc kubenswrapper[4813]: I1125 10:48:45.169673 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5m8t\" (UniqueName: \"kubernetes.io/projected/48ea1018-a88f-4ef0-a82f-7e3b012522ec-kube-api-access-w5m8t\") pod \"watcher-operator-controller-manager-864885998-bpbjt\" (UID: \"48ea1018-a88f-4ef0-a82f-7e3b012522ec\") " pod="openstack-operators/watcher-operator-controller-manager-864885998-bpbjt" Nov 25 10:48:45 crc kubenswrapper[4813]: I1125 10:48:45.180237 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-tc2mg" Nov 25 10:48:45 crc kubenswrapper[4813]: I1125 10:48:45.195335 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-547cf68667-6v6dd" event={"ID":"71c5bfc5-a289-4942-bc55-819f06787eb6","Type":"ContainerStarted","Data":"bb0995ee28e3447f33d660a37e5c0c377d0e338fad680e1deb1a38ae76bcb4ea"} Nov 25 10:48:45 crc kubenswrapper[4813]: W1125 10:48:45.195840 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03c63a63_9a46_4bda_941b_8c5ba81a13fe.slice/crio-4c5b212509a1be7a165cb7a325d1c8b44ae10f556aed8fe61c52fb6fdf4a1bcd WatchSource:0}: Error finding container 4c5b212509a1be7a165cb7a325d1c8b44ae10f556aed8fe61c52fb6fdf4a1bcd: Status 404 returned error can't find the container with id 4c5b212509a1be7a165cb7a325d1c8b44ae10f556aed8fe61c52fb6fdf4a1bcd Nov 25 10:48:45 crc kubenswrapper[4813]: I1125 10:48:45.220924 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvsf6\" (UniqueName: \"kubernetes.io/projected/09bd1800-0aaa-4908-ac58-e0890a2a309f-kube-api-access-fvsf6\") pod \"openstack-operator-controller-manager-5ffc8f797b-hbwwd\" (UID: \"09bd1800-0aaa-4908-ac58-e0890a2a309f\") " pod="openstack-operators/openstack-operator-controller-manager-5ffc8f797b-hbwwd" Nov 25 10:48:45 crc kubenswrapper[4813]: I1125 10:48:45.221295 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/09bd1800-0aaa-4908-ac58-e0890a2a309f-webhook-certs\") pod \"openstack-operator-controller-manager-5ffc8f797b-hbwwd\" (UID: \"09bd1800-0aaa-4908-ac58-e0890a2a309f\") " pod="openstack-operators/openstack-operator-controller-manager-5ffc8f797b-hbwwd" Nov 25 10:48:45 crc kubenswrapper[4813]: I1125 10:48:45.221433 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/09bd1800-0aaa-4908-ac58-e0890a2a309f-metrics-certs\") pod \"openstack-operator-controller-manager-5ffc8f797b-hbwwd\" (UID: \"09bd1800-0aaa-4908-ac58-e0890a2a309f\") " pod="openstack-operators/openstack-operator-controller-manager-5ffc8f797b-hbwwd" Nov 25 10:48:45 crc kubenswrapper[4813]: I1125 10:48:45.221509 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67krk\" (UniqueName: \"kubernetes.io/projected/2bf03402-32ec-423d-a6af-657bc0cfeb15-kube-api-access-67krk\") pod \"rabbitmq-cluster-operator-manager-668c99d594-qd4tx\" (UID: \"2bf03402-32ec-423d-a6af-657bc0cfeb15\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qd4tx" Nov 25 10:48:45 crc kubenswrapper[4813]: E1125 10:48:45.221854 4813 secret.go:188] 
Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 25 10:48:45 crc kubenswrapper[4813]: E1125 10:48:45.221903 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09bd1800-0aaa-4908-ac58-e0890a2a309f-metrics-certs podName:09bd1800-0aaa-4908-ac58-e0890a2a309f nodeName:}" failed. No retries permitted until 2025-11-25 10:48:45.721887213 +0000 UTC m=+1022.851597099 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/09bd1800-0aaa-4908-ac58-e0890a2a309f-metrics-certs") pod "openstack-operator-controller-manager-5ffc8f797b-hbwwd" (UID: "09bd1800-0aaa-4908-ac58-e0890a2a309f") : secret "metrics-server-cert" not found Nov 25 10:48:45 crc kubenswrapper[4813]: E1125 10:48:45.222047 4813 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 25 10:48:45 crc kubenswrapper[4813]: E1125 10:48:45.222075 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09bd1800-0aaa-4908-ac58-e0890a2a309f-webhook-certs podName:09bd1800-0aaa-4908-ac58-e0890a2a309f nodeName:}" failed. No retries permitted until 2025-11-25 10:48:45.722067569 +0000 UTC m=+1022.851777455 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/09bd1800-0aaa-4908-ac58-e0890a2a309f-webhook-certs") pod "openstack-operator-controller-manager-5ffc8f797b-hbwwd" (UID: "09bd1800-0aaa-4908-ac58-e0890a2a309f") : secret "webhook-server-cert" not found Nov 25 10:48:45 crc kubenswrapper[4813]: I1125 10:48:45.225914 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-864885998-bpbjt" Nov 25 10:48:45 crc kubenswrapper[4813]: I1125 10:48:45.235783 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-79856dc55c-dvfd9"] Nov 25 10:48:45 crc kubenswrapper[4813]: I1125 10:48:45.259545 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67krk\" (UniqueName: \"kubernetes.io/projected/2bf03402-32ec-423d-a6af-657bc0cfeb15-kube-api-access-67krk\") pod \"rabbitmq-cluster-operator-manager-668c99d594-qd4tx\" (UID: \"2bf03402-32ec-423d-a6af-657bc0cfeb15\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qd4tx" Nov 25 10:48:45 crc kubenswrapper[4813]: I1125 10:48:45.259552 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvsf6\" (UniqueName: \"kubernetes.io/projected/09bd1800-0aaa-4908-ac58-e0890a2a309f-kube-api-access-fvsf6\") pod \"openstack-operator-controller-manager-5ffc8f797b-hbwwd\" (UID: \"09bd1800-0aaa-4908-ac58-e0890a2a309f\") " pod="openstack-operators/openstack-operator-controller-manager-5ffc8f797b-hbwwd" Nov 25 10:48:45 crc kubenswrapper[4813]: I1125 10:48:45.273809 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-7d695c9b56-hjqzd"] Nov 25 10:48:45 crc kubenswrapper[4813]: I1125 10:48:45.325960 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0a946ff2-f2e3-48c2-ae3b-774a4ea85492-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-v2clw\" (UID: \"0a946ff2-f2e3-48c2-ae3b-774a4ea85492\") " 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-v2clw" Nov 25 10:48:45 crc kubenswrapper[4813]: E1125 10:48:45.326095 4813 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 25 10:48:45 crc kubenswrapper[4813]: E1125 10:48:45.326144 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0a946ff2-f2e3-48c2-ae3b-774a4ea85492-cert podName:0a946ff2-f2e3-48c2-ae3b-774a4ea85492 nodeName:}" failed. No retries permitted until 2025-11-25 10:48:46.326129495 +0000 UTC m=+1023.455839381 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0a946ff2-f2e3-48c2-ae3b-774a4ea85492-cert") pod "openstack-baremetal-operator-controller-manager-544b9bb9-v2clw" (UID: "0a946ff2-f2e3-48c2-ae3b-774a4ea85492") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 25 10:48:45 crc kubenswrapper[4813]: I1125 10:48:45.400116 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qd4tx" Nov 25 10:48:45 crc kubenswrapper[4813]: I1125 10:48:45.635511 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-774b86978c-f6dvp"] Nov 25 10:48:45 crc kubenswrapper[4813]: I1125 10:48:45.651815 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c9694994-8spkk"] Nov 25 10:48:45 crc kubenswrapper[4813]: I1125 10:48:45.732246 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/09bd1800-0aaa-4908-ac58-e0890a2a309f-webhook-certs\") pod \"openstack-operator-controller-manager-5ffc8f797b-hbwwd\" (UID: \"09bd1800-0aaa-4908-ac58-e0890a2a309f\") " pod="openstack-operators/openstack-operator-controller-manager-5ffc8f797b-hbwwd" Nov 25 10:48:45 crc kubenswrapper[4813]: I1125 10:48:45.732341 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/09bd1800-0aaa-4908-ac58-e0890a2a309f-metrics-certs\") pod \"openstack-operator-controller-manager-5ffc8f797b-hbwwd\" (UID: \"09bd1800-0aaa-4908-ac58-e0890a2a309f\") " pod="openstack-operators/openstack-operator-controller-manager-5ffc8f797b-hbwwd" Nov 25 10:48:45 crc kubenswrapper[4813]: E1125 10:48:45.732441 4813 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 25 10:48:45 crc kubenswrapper[4813]: E1125 10:48:45.732512 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09bd1800-0aaa-4908-ac58-e0890a2a309f-webhook-certs podName:09bd1800-0aaa-4908-ac58-e0890a2a309f nodeName:}" failed. No retries permitted until 2025-11-25 10:48:46.732494769 +0000 UTC m=+1023.862204655 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/09bd1800-0aaa-4908-ac58-e0890a2a309f-webhook-certs") pod "openstack-operator-controller-manager-5ffc8f797b-hbwwd" (UID: "09bd1800-0aaa-4908-ac58-e0890a2a309f") : secret "webhook-server-cert" not found Nov 25 10:48:45 crc kubenswrapper[4813]: E1125 10:48:45.732527 4813 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 25 10:48:45 crc kubenswrapper[4813]: E1125 10:48:45.732602 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09bd1800-0aaa-4908-ac58-e0890a2a309f-metrics-certs podName:09bd1800-0aaa-4908-ac58-e0890a2a309f nodeName:}" failed. No retries permitted until 2025-11-25 10:48:46.732582812 +0000 UTC m=+1023.862292768 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/09bd1800-0aaa-4908-ac58-e0890a2a309f-metrics-certs") pod "openstack-operator-controller-manager-5ffc8f797b-hbwwd" (UID: "09bd1800-0aaa-4908-ac58-e0890a2a309f") : secret "metrics-server-cert" not found Nov 25 10:48:45 crc kubenswrapper[4813]: I1125 10:48:45.750375 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-858778c9dc-fs9sm"] Nov 25 10:48:46 crc kubenswrapper[4813]: I1125 10:48:46.172524 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-79556f57fc-6j272"] Nov 25 10:48:46 crc kubenswrapper[4813]: W1125 10:48:46.179254 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9374bbb0_b458_4c1c_a327_67bcbea83045.slice/crio-f61d67014cc29e2f5a7e5c8f1a65f87c63a61c153cbe89bf25b24d33cace62d7 WatchSource:0}: Error finding container f61d67014cc29e2f5a7e5c8f1a65f87c63a61c153cbe89bf25b24d33cace62d7: Status 404 returned error can't find the container with id f61d67014cc29e2f5a7e5c8f1a65f87c63a61c153cbe89bf25b24d33cace62d7 Nov 25 10:48:46 crc kubenswrapper[4813]: I1125 10:48:46.184159 4813 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 10:48:46 crc kubenswrapper[4813]: I1125 10:48:46.194591 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-fd75fd47d-gjs27"] Nov 25 10:48:46 crc kubenswrapper[4813]: I1125 10:48:46.224133 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-748dc6576f-76j46"] Nov 25 10:48:46 crc kubenswrapper[4813]: I1125 10:48:46.255173 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-4wff2" event={"ID":"03c63a63-9a46-4bda-941b-8c5ba81a13fe","Type":"ContainerStarted","Data":"4c5b212509a1be7a165cb7a325d1c8b44ae10f556aed8fe61c52fb6fdf4a1bcd"} Nov 25 10:48:46 crc kubenswrapper[4813]: I1125 10:48:46.257511 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-6fdc4fcf86-fjkzd"] Nov 25 10:48:46 crc kubenswrapper[4813]: I1125 10:48:46.262045 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-774b86978c-f6dvp" event={"ID":"eaf6f1c0-6585-4eba-8baf-942ed2503735","Type":"ContainerStarted","Data":"43d1645ca830851824ba423eb9910b3f69fca6ed29de52343a3e3306cf5e78cf"} Nov 25 10:48:46 crc 
kubenswrapper[4813]: I1125 10:48:46.262516 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-66cf5c67ff-tc2mg"] Nov 25 10:48:46 crc kubenswrapper[4813]: I1125 10:48:46.267835 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5bfcdc958c-blrjt"] Nov 25 10:48:46 crc kubenswrapper[4813]: I1125 10:48:46.273092 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5db546f9d9-2d2x7"] Nov 25 10:48:46 crc kubenswrapper[4813]: I1125 10:48:46.279418 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-58bb8d67cc-jcjzx"] Nov 25 10:48:46 crc kubenswrapper[4813]: E1125 10:48:46.282466 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-67krk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-qd4tx_openstack-operators(2bf03402-32ec-423d-a6af-657bc0cfeb15): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 10:48:46 crc kubenswrapper[4813]: E1125 10:48:46.283746 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qd4tx" podUID="2bf03402-32ec-423d-a6af-657bc0cfeb15" Nov 25 10:48:46 crc kubenswrapper[4813]: I1125 10:48:46.284248 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-5ldjd"] Nov 25 10:48:46 crc kubenswrapper[4813]: E1125 10:48:46.285116 4813 
kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:82207e753574d4be246f86c4b074500d66cf20214aa80f0a8525cf3287a35e6d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6fmxz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5cb74df96-cwrzw_openstack-operators(49b29226-49bf-4d59-9c7f-998d924bdace): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 10:48:46 crc kubenswrapper[4813]: I1125 10:48:46.295420 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-858778c9dc-fs9sm" event={"ID":"06c81a1e-0461-4457-85ea-1a4060423eda","Type":"ContainerStarted","Data":"dabddfe4e040362c839ec88c623e6217ecf2bbfe134afdc94df5381cb3359efa"} Nov 25 10:48:46 crc kubenswrapper[4813]: E1125 10:48:46.295656 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6fmxz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5cb74df96-cwrzw_openstack-operators(49b29226-49bf-4d59-9c7f-998d924bdace): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 10:48:46 crc kubenswrapper[4813]: E1125 10:48:46.298881 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/test-operator-controller-manager-5cb74df96-cwrzw" podUID="49b29226-49bf-4d59-9c7f-998d924bdace" Nov 25 10:48:46 crc kubenswrapper[4813]: I1125 10:48:46.312903 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-864885998-bpbjt"] Nov 25 10:48:46 crc kubenswrapper[4813]: E1125 10:48:46.330125 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:207578cb433471cc1a79c21a808c8a15489d1d3c9fa77e29f3f697c33917fec6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wk5v7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-7c57c8bbc4-c6kw6_openstack-operators(b69526d6-6616-4536-a228-4cdb57e1881c): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 10:48:46 crc kubenswrapper[4813]: I1125 10:48:46.330218 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-8spkk" event={"ID":"af18e07e-95b3-476f-9604-824c36ae74a5","Type":"ContainerStarted","Data":"ab0f92153309147f4cbc5d89d9b59bcd5e5a98517bfb6e77770bae1b441ba497"} Nov 25 10:48:46 crc kubenswrapper[4813]: I1125 10:48:46.341986 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0a946ff2-f2e3-48c2-ae3b-774a4ea85492-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-v2clw\" (UID: \"0a946ff2-f2e3-48c2-ae3b-774a4ea85492\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-v2clw" Nov 25 10:48:46 crc kubenswrapper[4813]: I1125 10:48:46.350150 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-567f98c9d-qplf9"] Nov 25 10:48:46 crc kubenswrapper[4813]: E1125 10:48:46.388157 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:7b90521b9e9cb4eb43c2f1c3bf85dbd068d684315f4f705b07708dd078df9d04,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-t4z4s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-cb6c4fdb7-5ldjd_openstack-operators(baf6f7bb-db50-4013-8b77-2b7e4c8101c2): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 10:48:46 crc kubenswrapper[4813]: I1125 10:48:46.388248 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-c6kw6"] Nov 25 10:48:46 crc kubenswrapper[4813]: I1125 10:48:46.388941 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0a946ff2-f2e3-48c2-ae3b-774a4ea85492-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-v2clw\" (UID: \"0a946ff2-f2e3-48c2-ae3b-774a4ea85492\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-v2clw" Nov 25 10:48:46 crc kubenswrapper[4813]: I1125 10:48:46.396752 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-6j272" event={"ID":"9374bbb0-b458-4c1c-a327-67bcbea83045","Type":"ContainerStarted","Data":"f61d67014cc29e2f5a7e5c8f1a65f87c63a61c153cbe89bf25b24d33cace62d7"} Nov 25 10:48:46 crc kubenswrapper[4813]: E1125 10:48:46.398352 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wk5v7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
neutron-operator-controller-manager-7c57c8bbc4-c6kw6_openstack-operators(b69526d6-6616-4536-a228-4cdb57e1881c): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 10:48:46 crc kubenswrapper[4813]: E1125 10:48:46.399732 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-c6kw6" podUID="b69526d6-6616-4536-a228-4cdb57e1881c" Nov 25 10:48:46 crc kubenswrapper[4813]: I1125 10:48:46.405030 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qd4tx"] Nov 25 10:48:46 crc kubenswrapper[4813]: I1125 10:48:46.405078 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-hjqzd" event={"ID":"aa2934d9-d547-49d0-9d06-232120b44fa1","Type":"ContainerStarted","Data":"4c3922bca2352b011201bb54c07f34472ccf5a51d3f0a78e6a8f58304c5fedbe"} Nov 25 10:48:46 crc kubenswrapper[4813]: E1125 10:48:46.407924 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-t4z4s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-cb6c4fdb7-5ldjd_openstack-operators(baf6f7bb-db50-4013-8b77-2b7e4c8101c2): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 10:48:46 crc kubenswrapper[4813]: E1125 10:48:46.409152 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-5ldjd" podUID="baf6f7bb-db50-4013-8b77-2b7e4c8101c2" Nov 25 10:48:46 crc kubenswrapper[4813]: W1125 10:48:46.412499 4813 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f9254c7_c8dc_4504_bdf5_264c78e03b0c.slice/crio-f604ec0cd152091528f0a5618cf0dd49bbfc68d25208b67b64bc68615bb7d5a5 WatchSource:0}: Error finding container f604ec0cd152091528f0a5618cf0dd49bbfc68d25208b67b64bc68615bb7d5a5: Status 404 returned error can't find the container with id f604ec0cd152091528f0a5618cf0dd49bbfc68d25208b67b64bc68615bb7d5a5 Nov 25 10:48:46 crc kubenswrapper[4813]: I1125 10:48:46.425245 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5cb74df96-cwrzw"] Nov 25 10:48:46 crc kubenswrapper[4813]: I1125 10:48:46.457543 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-v2clw" Nov 25 10:48:46 crc kubenswrapper[4813]: I1125 10:48:46.458175 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-dvfd9" event={"ID":"a650bdd3-2541-4b76-b5db-64273262bc06","Type":"ContainerStarted","Data":"f6f013a0000bb7e6cbb9105ec3213170f9a97bc28c8f67f112ca31753a70bcdd"} Nov 25 10:48:46 crc kubenswrapper[4813]: E1125 10:48:46.466728 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:5324a6d2f76fc3041023b0cbd09a733ef2b59f310d390e4d6483d219eb96494f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6tjz4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed 
in pod telemetry-operator-controller-manager-567f98c9d-qplf9_openstack-operators(5f9254c7-c8dc-4504-bdf5-264c78e03b0c): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 10:48:46 crc kubenswrapper[4813]: E1125 10:48:46.540238 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6tjz4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-567f98c9d-qplf9_openstack-operators(5f9254c7-c8dc-4504-bdf5-264c78e03b0c): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 10:48:46 crc kubenswrapper[4813]: E1125 10:48:46.541452 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-qplf9" podUID="5f9254c7-c8dc-4504-bdf5-264c78e03b0c" Nov 25 10:48:46 crc kubenswrapper[4813]: I1125 10:48:46.761592 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/09bd1800-0aaa-4908-ac58-e0890a2a309f-webhook-certs\") pod \"openstack-operator-controller-manager-5ffc8f797b-hbwwd\" (UID: \"09bd1800-0aaa-4908-ac58-e0890a2a309f\") " pod="openstack-operators/openstack-operator-controller-manager-5ffc8f797b-hbwwd" Nov 25 10:48:46 crc kubenswrapper[4813]: I1125 10:48:46.761811 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/09bd1800-0aaa-4908-ac58-e0890a2a309f-metrics-certs\") pod \"openstack-operator-controller-manager-5ffc8f797b-hbwwd\" (UID: \"09bd1800-0aaa-4908-ac58-e0890a2a309f\") " pod="openstack-operators/openstack-operator-controller-manager-5ffc8f797b-hbwwd" Nov 25 10:48:46 crc kubenswrapper[4813]: E1125 10:48:46.762396 4813 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 25 10:48:46 crc kubenswrapper[4813]: E1125 10:48:46.762437 4813 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/09bd1800-0aaa-4908-ac58-e0890a2a309f-webhook-certs podName:09bd1800-0aaa-4908-ac58-e0890a2a309f nodeName:}" failed. No retries permitted until 2025-11-25 10:48:48.762423719 +0000 UTC m=+1025.892133605 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/09bd1800-0aaa-4908-ac58-e0890a2a309f-webhook-certs") pod "openstack-operator-controller-manager-5ffc8f797b-hbwwd" (UID: "09bd1800-0aaa-4908-ac58-e0890a2a309f") : secret "webhook-server-cert" not found Nov 25 10:48:46 crc kubenswrapper[4813]: E1125 10:48:46.763190 4813 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 25 10:48:46 crc kubenswrapper[4813]: E1125 10:48:46.763218 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09bd1800-0aaa-4908-ac58-e0890a2a309f-metrics-certs podName:09bd1800-0aaa-4908-ac58-e0890a2a309f nodeName:}" failed. No retries permitted until 2025-11-25 10:48:48.763208411 +0000 UTC m=+1025.892918297 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/09bd1800-0aaa-4908-ac58-e0890a2a309f-metrics-certs") pod "openstack-operator-controller-manager-5ffc8f797b-hbwwd" (UID: "09bd1800-0aaa-4908-ac58-e0890a2a309f") : secret "metrics-server-cert" not found Nov 25 10:48:47 crc kubenswrapper[4813]: I1125 10:48:47.168923 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-v2clw"] Nov 25 10:48:47 crc kubenswrapper[4813]: W1125 10:48:47.223865 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a946ff2_f2e3_48c2_ae3b_774a4ea85492.slice/crio-2c757c7a9057382700c830b3d7d945710148a804c46b698832f1a4acca4c3ee9 WatchSource:0}: Error finding container 2c757c7a9057382700c830b3d7d945710148a804c46b698832f1a4acca4c3ee9: Status 404 returned error can't find the container with id 2c757c7a9057382700c830b3d7d945710148a804c46b698832f1a4acca4c3ee9 Nov 25 10:48:47 crc kubenswrapper[4813]: I1125 10:48:47.468092 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-c6kw6" event={"ID":"b69526d6-6616-4536-a228-4cdb57e1881c","Type":"ContainerStarted","Data":"462a0fb913cd93b7dfb2932e1c893e5b4e7d8e1652b76848ad155e24d2a6c866"} Nov 25 10:48:47 crc kubenswrapper[4813]: I1125 10:48:47.474563 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-qplf9" event={"ID":"5f9254c7-c8dc-4504-bdf5-264c78e03b0c","Type":"ContainerStarted","Data":"f604ec0cd152091528f0a5618cf0dd49bbfc68d25208b67b64bc68615bb7d5a5"} Nov 25 10:48:47 crc kubenswrapper[4813]: E1125 10:48:47.477726 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:207578cb433471cc1a79c21a808c8a15489d1d3c9fa77e29f3f697c33917fec6\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-c6kw6" podUID="b69526d6-6616-4536-a228-4cdb57e1881c" Nov 25 10:48:47 crc kubenswrapper[4813]: E1125 
10:48:47.478474 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:5324a6d2f76fc3041023b0cbd09a733ef2b59f310d390e4d6483d219eb96494f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-qplf9" podUID="5f9254c7-c8dc-4504-bdf5-264c78e03b0c" Nov 25 10:48:47 crc kubenswrapper[4813]: I1125 10:48:47.479559 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-jcjzx" event={"ID":"efca9205-8a59-45ce-8c50-36b0d0389f12","Type":"ContainerStarted","Data":"f5c5595629676d4c6437316e60a24aedbceb60c92711e043785d515ac7591fca"} Nov 25 10:48:47 crc kubenswrapper[4813]: I1125 10:48:47.481625 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qd4tx" event={"ID":"2bf03402-32ec-423d-a6af-657bc0cfeb15","Type":"ContainerStarted","Data":"e7f4494c17224de2d73b82f944b97bf1cc1e9148cc199eb3578f0be3a0db26f4"} Nov 25 10:48:47 crc kubenswrapper[4813]: I1125 10:48:47.487505 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-2d2x7" event={"ID":"9093a664-86f3-4349-bd13-0a5e4aca8036","Type":"ContainerStarted","Data":"7b8fb4f3520889f1dcf963036be2fa0183e5a537bbd46fd8bb2ee500edd6feac"} Nov 25 10:48:47 crc kubenswrapper[4813]: E1125 10:48:47.488589 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qd4tx" podUID="2bf03402-32ec-423d-a6af-657bc0cfeb15" Nov 25 10:48:47 crc kubenswrapper[4813]: I1125 10:48:47.490909 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-tc2mg" event={"ID":"db556642-a360-4559-8cde-7c25d7a893e0","Type":"ContainerStarted","Data":"3eeb5252270a7ecd113bd8fc762518b9955e8ab11c9cb97a22a38141bd009589"} Nov 25 10:48:47 crc kubenswrapper[4813]: I1125 10:48:47.500242 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-v2clw" event={"ID":"0a946ff2-f2e3-48c2-ae3b-774a4ea85492","Type":"ContainerStarted","Data":"2c757c7a9057382700c830b3d7d945710148a804c46b698832f1a4acca4c3ee9"} Nov 25 10:48:47 crc kubenswrapper[4813]: I1125 10:48:47.502265 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-5ldjd" event={"ID":"baf6f7bb-db50-4013-8b77-2b7e4c8101c2","Type":"ContainerStarted","Data":"d4121ddae925ca9db9b97122055bad994732ba64d182bdb90e6be98a27beb12a"} Nov 25 10:48:47 crc kubenswrapper[4813]: I1125 10:48:47.509122 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-fjkzd" event={"ID":"94c3d2b4-f1bb-402d-a39d-78e16bee970b","Type":"ContainerStarted","Data":"e3ab4f2fcd6da43243b1277308f1bf56461531e9200fcab7c33572e5527ca4fc"} Nov 25 10:48:47 crc 
kubenswrapper[4813]: E1125 10:48:47.509224 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:7b90521b9e9cb4eb43c2f1c3bf85dbd068d684315f4f705b07708dd078df9d04\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-5ldjd" podUID="baf6f7bb-db50-4013-8b77-2b7e4c8101c2" Nov 25 10:48:47 crc kubenswrapper[4813]: I1125 10:48:47.513168 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-gjs27" event={"ID":"a31ffbb8-0255-45d6-9125-6cccc7b444ba","Type":"ContainerStarted","Data":"bb6672873040bfd5cc53636f479ad420294117c07214fc8de01d7c09bd1f8475"} Nov 25 10:48:47 crc kubenswrapper[4813]: I1125 10:48:47.523550 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-76j46" event={"ID":"7921584b-8ce0-45b8-8a56-ab0fdde43582","Type":"ContainerStarted","Data":"942c952dcb19392656801fb39cd5d6313c779a6325a61626f71fb1c2939e9f02"} Nov 25 10:48:47 crc kubenswrapper[4813]: I1125 10:48:47.526943 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-blrjt" event={"ID":"d4a62556-e6e8-42dc-b7e4-180c40611393","Type":"ContainerStarted","Data":"87af80e1f970c5c36ab1036808a0f5f424d169bd2cedb4ef973b49aa7e0656e2"} Nov 25 10:48:47 crc kubenswrapper[4813]: I1125 10:48:47.538865 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-864885998-bpbjt" event={"ID":"48ea1018-a88f-4ef0-a82f-7e3b012522ec","Type":"ContainerStarted","Data":"926df4d961790492339d067a0d6317f3c42acb9ee00a4d4ba60575e868c03c0b"} Nov 25 10:48:47 crc kubenswrapper[4813]: I1125 10:48:47.544223 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cb74df96-cwrzw" event={"ID":"49b29226-49bf-4d59-9c7f-998d924bdace","Type":"ContainerStarted","Data":"94e5d015d6dc8679601b976734a538745ac6caaba4242cfea44b388ff7bd7181"} Nov 25 10:48:47 crc kubenswrapper[4813]: E1125 10:48:47.555659 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:82207e753574d4be246f86c4b074500d66cf20214aa80f0a8525cf3287a35e6d\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/test-operator-controller-manager-5cb74df96-cwrzw" podUID="49b29226-49bf-4d59-9c7f-998d924bdace" Nov 25 10:48:48 crc kubenswrapper[4813]: E1125 10:48:48.557125 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qd4tx" podUID="2bf03402-32ec-423d-a6af-657bc0cfeb15" Nov 25 10:48:48 crc kubenswrapper[4813]: E1125 10:48:48.558021 4813 
pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:5324a6d2f76fc3041023b0cbd09a733ef2b59f310d390e4d6483d219eb96494f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-qplf9" podUID="5f9254c7-c8dc-4504-bdf5-264c78e03b0c" Nov 25 10:48:48 crc kubenswrapper[4813]: E1125 10:48:48.561998 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:7b90521b9e9cb4eb43c2f1c3bf85dbd068d684315f4f705b07708dd078df9d04\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-5ldjd" podUID="baf6f7bb-db50-4013-8b77-2b7e4c8101c2" Nov 25 10:48:48 crc kubenswrapper[4813]: E1125 10:48:48.563300 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:82207e753574d4be246f86c4b074500d66cf20214aa80f0a8525cf3287a35e6d\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/test-operator-controller-manager-5cb74df96-cwrzw" podUID="49b29226-49bf-4d59-9c7f-998d924bdace" Nov 25 10:48:48 crc kubenswrapper[4813]: E1125 10:48:48.571039 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:207578cb433471cc1a79c21a808c8a15489d1d3c9fa77e29f3f697c33917fec6\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-c6kw6" podUID="b69526d6-6616-4536-a228-4cdb57e1881c" Nov 25 10:48:48 crc kubenswrapper[4813]: I1125 10:48:48.816556 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/09bd1800-0aaa-4908-ac58-e0890a2a309f-metrics-certs\") pod \"openstack-operator-controller-manager-5ffc8f797b-hbwwd\" (UID: \"09bd1800-0aaa-4908-ac58-e0890a2a309f\") " pod="openstack-operators/openstack-operator-controller-manager-5ffc8f797b-hbwwd" Nov 25 10:48:48 crc kubenswrapper[4813]: I1125 10:48:48.816742 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/09bd1800-0aaa-4908-ac58-e0890a2a309f-webhook-certs\") pod \"openstack-operator-controller-manager-5ffc8f797b-hbwwd\" (UID: \"09bd1800-0aaa-4908-ac58-e0890a2a309f\") " pod="openstack-operators/openstack-operator-controller-manager-5ffc8f797b-hbwwd" Nov 25 10:48:48 crc kubenswrapper[4813]: I1125 10:48:48.823127 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" 
(UniqueName: \"kubernetes.io/secret/09bd1800-0aaa-4908-ac58-e0890a2a309f-webhook-certs\") pod \"openstack-operator-controller-manager-5ffc8f797b-hbwwd\" (UID: \"09bd1800-0aaa-4908-ac58-e0890a2a309f\") " pod="openstack-operators/openstack-operator-controller-manager-5ffc8f797b-hbwwd" Nov 25 10:48:48 crc kubenswrapper[4813]: I1125 10:48:48.823934 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/09bd1800-0aaa-4908-ac58-e0890a2a309f-metrics-certs\") pod \"openstack-operator-controller-manager-5ffc8f797b-hbwwd\" (UID: \"09bd1800-0aaa-4908-ac58-e0890a2a309f\") " pod="openstack-operators/openstack-operator-controller-manager-5ffc8f797b-hbwwd" Nov 25 10:48:48 crc kubenswrapper[4813]: I1125 10:48:48.984727 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5ffc8f797b-hbwwd" Nov 25 10:48:51 crc kubenswrapper[4813]: I1125 10:48:51.967040 4813 patch_prober.go:28] interesting pod/machine-config-daemon-knhz8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 10:48:51 crc kubenswrapper[4813]: I1125 10:48:51.967125 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" podUID="8ece7e9c-d49a-4348-98ec-bd6ab589f750" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 10:48:59 crc kubenswrapper[4813]: E1125 10:48:59.366504 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:c053e34316044f14929e16e4f0d97f9f1b24cb68b5e22b925ca74c66aaaed0a7" Nov 25 10:48:59 crc kubenswrapper[4813]: E1125 10:48:59.367341 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:c053e34316044f14929e16e4f0d97f9f1b24cb68b5e22b925ca74c66aaaed0a7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fzggh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-79556f57fc-6j272_openstack-operators(9374bbb0-b458-4c1c-a327-67bcbea83045): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 10:49:00 crc kubenswrapper[4813]: E1125 10:49:00.567452 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:5edd825a235f5784d9a65892763c5388c39df1731d0fcbf4ee33408b8c83ac96" Nov 25 10:49:00 crc kubenswrapper[4813]: E1125 10:49:00.567669 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:5edd825a235f5784d9a65892763c5388c39df1731d0fcbf4ee33408b8c83ac96,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6vlhn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-774b86978c-f6dvp_openstack-operators(eaf6f1c0-6585-4eba-8baf-942ed2503735): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 10:49:01 crc kubenswrapper[4813]: E1125 10:49:01.189021 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:78852f8ba332a5756c1551c126157f735279101a0fc3277ba4aa4db3478789dd" Nov 25 10:49:01 crc kubenswrapper[4813]: E1125 10:49:01.203782 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:78852f8ba332a5756c1551c126157f735279101a0fc3277ba4aa4db3478789dd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-baremetal-operator-agent:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_ANSIBLEEE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_EVALUATOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-evaluator:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_NOTIFIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-notifier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_APACHE_IMAGE_URL_DEFAULT,Value:registry.redhat.io/ubi9/httpd-24:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_KEYSTONE_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-keystone-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOME
TER_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_IPMI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_MYSQLD_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/mysqld-exporter:v0.15.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_NOTIFICATION_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-notification:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_SGCORE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/sg-core:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_BACKUP_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-backup:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_VOLUME_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-volume:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLOUDKITTY_API_IMAGE_URL_DEFAULT,Value:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLOUDKITTY_PROC_IMAGE_URL_DEFAULT,Value:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-processor:current,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_BACKENDBIND9_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-backend-bind9:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_MDNS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-mdns:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_PRODUCER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-producer:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_UNBOUND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-unbound:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_FRR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-frr:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_ISCSID_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-iscsid:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_KEPLER_IMAGE_URL_DEFAULT,Value:quay.io/sustainable_computing_io/kepler:release-0.7.12,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_LOGROTATE_CROND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cron:current-podified,ValueFrom:nil,},EnvVar{Name:
RELATED_IMAGE_EDPM_MULTIPATHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-multipathd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_DHCP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_METADATA_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_OVN_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-ovn-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_SRIOV_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NODE_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/node-exporter:v1.5.0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_OVN_BGP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-bgp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_PODMAN_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/navidys/prometheus-podman-exporter:v1.10.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_GLANCE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_CFNAPI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api-cfn:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HORIZON_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_MEMCACHED_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_REDIS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-redis:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-conductor:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_INSPECTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-inspector:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_NEUTRON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-neutron-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PXE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-pxe:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PYTHON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/ironic-python-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KEYSTONE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-keystone:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KSM_IMAGE_URL_DEFAULT,Value:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_
API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SHARE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-share:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MARIADB_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NET_UTILS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-netutils:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NEUTRON_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_NOVNC_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-novncproxy:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HEALTHMANAGER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-health-manager:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HOUSEKEEPING_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-housekeeping:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_RSYSLOG_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rsyslog:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_CLIENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_MUST_GATHER_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-must-gather:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_NETWORK_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OS_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/edpm-hardened-uefi:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_OVS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_
OVN_NORTHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-northd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_SB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PLACEMENT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_RABBITMQ_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_ACCOUNT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-account:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-container:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_OBJECT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-object:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_PROXY_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-proxy-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_TEST_TEMPEST_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_APPLIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-applier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_DECISION_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-decision-engine:current-podified,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jltmt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-baremetal-operator-controller-manager-544b9bb9-v2clw_openstack-operators(0a946ff2-f2e3-48c2-ae3b-774a4ea85492): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 10:49:01 crc kubenswrapper[4813]: E1125 10:49:01.347153 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.27:5001/openstack-k8s-operators/glance-operator:c9b3d6b317fe7a16a5ab2845a8484f3d4d6d6aa9" Nov 25 10:49:01 crc kubenswrapper[4813]: E1125 10:49:01.347213 4813 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.27:5001/openstack-k8s-operators/glance-operator:c9b3d6b317fe7a16a5ab2845a8484f3d4d6d6aa9" Nov 25 10:49:01 crc kubenswrapper[4813]: E1125 10:49:01.347374 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.27:5001/openstack-k8s-operators/glance-operator:c9b3d6b317fe7a16a5ab2845a8484f3d4d6d6aa9,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mkdps,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-547cf68667-6v6dd_openstack-operators(71c5bfc5-a289-4942-bc55-819f06787eb6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 10:49:01 crc kubenswrapper[4813]: E1125 10:49:01.941537 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f" Nov 25 10:49:01 crc kubenswrapper[4813]: E1125 10:49:01.942174 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w5m8t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-864885998-bpbjt_openstack-operators(48ea1018-a88f-4ef0-a82f-7e3b012522ec): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 10:49:02 crc kubenswrapper[4813]: I1125 10:49:02.858745 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5ffc8f797b-hbwwd"] Nov 25 10:49:03 crc kubenswrapper[4813]: I1125 10:49:03.676003 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5ffc8f797b-hbwwd" event={"ID":"09bd1800-0aaa-4908-ac58-e0890a2a309f","Type":"ContainerStarted","Data":"1af4b54d1e268de704180116bb5e4210a1d4b4777804228942079efe249661bc"} Nov 25 10:49:04 crc kubenswrapper[4813]: E1125 10:49:04.244520 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cxdkn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-79856dc55c-dvfd9_openstack-operators(a650bdd3-2541-4b76-b5db-64273262bc06): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 10:49:04 crc kubenswrapper[4813]: E1125 10:49:04.249043 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"" 
pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-dvfd9" podUID="a650bdd3-2541-4b76-b5db-64273262bc06" Nov 25 10:49:04 crc kubenswrapper[4813]: I1125 10:49:04.698072 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-blrjt" event={"ID":"d4a62556-e6e8-42dc-b7e4-180c40611393","Type":"ContainerStarted","Data":"cd23090653c4496ed50af88277f58037a85197cd76c6f114b1b622608779a790"} Nov 25 10:49:04 crc kubenswrapper[4813]: I1125 10:49:04.704133 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-hjqzd" event={"ID":"aa2934d9-d547-49d0-9d06-232120b44fa1","Type":"ContainerStarted","Data":"5f209543dcf3c6c9fb9d1758f99f40344cb7a6cae23f523fdc59a66663c17b52"} Nov 25 10:49:04 crc kubenswrapper[4813]: I1125 10:49:04.727529 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-jcjzx" event={"ID":"efca9205-8a59-45ce-8c50-36b0d0389f12","Type":"ContainerStarted","Data":"e0cb40cfa7225ebe4e4ed8f072806083611272de9074dba01cdf54df049a2187"} Nov 25 10:49:04 crc kubenswrapper[4813]: I1125 10:49:04.738603 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-fjkzd" event={"ID":"94c3d2b4-f1bb-402d-a39d-78e16bee970b","Type":"ContainerStarted","Data":"ff9c57c50ce56d5b51d6abe1535d43cb0fba6d50162bcc69bc19e4bf3d433028"} Nov 25 10:49:04 crc kubenswrapper[4813]: I1125 10:49:04.751297 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-c6kw6" event={"ID":"b69526d6-6616-4536-a228-4cdb57e1881c","Type":"ContainerStarted","Data":"7e6532e096a42d57e3dc09ca3de8f7bdad6af978b55fb5a65084a1ddbdfce036"} Nov 25 10:49:04 crc kubenswrapper[4813]: I1125 10:49:04.767022 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-tc2mg" event={"ID":"db556642-a360-4559-8cde-7c25d7a893e0","Type":"ContainerStarted","Data":"c7da2017fb3bb645d069c5a5e65e5ebecf25da108fecf7e3d41efdb7ffbd8944"} Nov 25 10:49:04 crc kubenswrapper[4813]: I1125 10:49:04.782216 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-dvfd9" event={"ID":"a650bdd3-2541-4b76-b5db-64273262bc06","Type":"ContainerStarted","Data":"9d1c0914bdf672c19650bf0626b573178a26fafce469f87675446e083e25d7f1"} Nov 25 10:49:04 crc kubenswrapper[4813]: I1125 10:49:04.783250 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-dvfd9" Nov 25 10:49:04 crc kubenswrapper[4813]: E1125 10:49:04.790278 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-dvfd9" podUID="a650bdd3-2541-4b76-b5db-64273262bc06" Nov 25 10:49:04 crc kubenswrapper[4813]: I1125 10:49:04.791256 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5ffc8f797b-hbwwd" event={"ID":"09bd1800-0aaa-4908-ac58-e0890a2a309f","Type":"ContainerStarted","Data":"489e7457fc7880bb6b4a3038d3eee48ec357875285b69839b02853bb748ea343"} Nov 25 10:49:04 
crc kubenswrapper[4813]: I1125 10:49:04.791367 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-5ffc8f797b-hbwwd" Nov 25 10:49:04 crc kubenswrapper[4813]: I1125 10:49:04.802017 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-2d2x7" event={"ID":"9093a664-86f3-4349-bd13-0a5e4aca8036","Type":"ContainerStarted","Data":"fb049d972ea51300a04a546dddd9759b2d8d961453b95294c932f7a257597f6f"} Nov 25 10:49:04 crc kubenswrapper[4813]: I1125 10:49:04.813344 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-8spkk" event={"ID":"af18e07e-95b3-476f-9604-824c36ae74a5","Type":"ContainerStarted","Data":"842243eec8d9b052ceceececb34b556945beb110af325f4bd64c2f744b4e1647"} Nov 25 10:49:04 crc kubenswrapper[4813]: I1125 10:49:04.830628 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-76j46" event={"ID":"7921584b-8ce0-45b8-8a56-ab0fdde43582","Type":"ContainerStarted","Data":"981d6ff1513f5143a0da7118746e1562edac259524f9d63025c633639fcbd4f7"} Nov 25 10:49:04 crc kubenswrapper[4813]: I1125 10:49:04.835026 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-gjs27" event={"ID":"a31ffbb8-0255-45d6-9125-6cccc7b444ba","Type":"ContainerStarted","Data":"8818b6a10215e75e42b1355f20c5537b4b5710923718cbe61419fb5d93da0562"} Nov 25 10:49:04 crc kubenswrapper[4813]: I1125 10:49:04.859143 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-4wff2" event={"ID":"03c63a63-9a46-4bda-941b-8c5ba81a13fe","Type":"ContainerStarted","Data":"e1eb0c6c8ed1a13bd8d9f904f6fa9f54b6e8bffa78cd8521b6ff411c256cf6af"} Nov 25 10:49:04 crc kubenswrapper[4813]: I1125 10:49:04.870737 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-858778c9dc-fs9sm" event={"ID":"06c81a1e-0461-4457-85ea-1a4060423eda","Type":"ContainerStarted","Data":"0eaafd13da0467f35b0a7f4465a2e7e47d34f239a8a0985e6c60e616eecd1fbf"} Nov 25 10:49:04 crc kubenswrapper[4813]: I1125 10:49:04.916187 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-5ffc8f797b-hbwwd" podStartSLOduration=20.916165141 podStartE2EDuration="20.916165141s" podCreationTimestamp="2025-11-25 10:48:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 10:49:04.903883132 +0000 UTC m=+1042.033593038" watchObservedRunningTime="2025-11-25 10:49:04.916165141 +0000 UTC m=+1042.045875027" Nov 25 10:49:05 crc kubenswrapper[4813]: E1125 10:49:05.880101 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-dvfd9" podUID="a650bdd3-2541-4b76-b5db-64273262bc06" Nov 25 10:49:14 crc kubenswrapper[4813]: I1125 10:49:14.365411 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-dvfd9" Nov 25 10:49:16 crc 
kubenswrapper[4813]: E1125 10:49:16.203081 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:82207e753574d4be246f86c4b074500d66cf20214aa80f0a8525cf3287a35e6d" Nov 25 10:49:16 crc kubenswrapper[4813]: E1125 10:49:16.203594 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:82207e753574d4be246f86c4b074500d66cf20214aa80f0a8525cf3287a35e6d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6fmxz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5cb74df96-cwrzw_openstack-operators(49b29226-49bf-4d59-9c7f-998d924bdace): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 10:49:16 crc kubenswrapper[4813]: E1125 10:49:16.710102 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/telemetry-operator@sha256:5324a6d2f76fc3041023b0cbd09a733ef2b59f310d390e4d6483d219eb96494f" Nov 25 10:49:16 crc kubenswrapper[4813]: E1125 10:49:16.710673 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:5324a6d2f76fc3041023b0cbd09a733ef2b59f310d390e4d6483d219eb96494f,Command:[/manager],Args:[--leader-elect 
--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6tjz4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-567f98c9d-qplf9_openstack-operators(5f9254c7-c8dc-4504-bdf5-264c78e03b0c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 10:49:16 crc kubenswrapper[4813]: I1125 10:49:16.952762 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-5ldjd" event={"ID":"baf6f7bb-db50-4013-8b77-2b7e4c8101c2","Type":"ContainerStarted","Data":"76db09afdbd7878d0e725a85cae6ef51ea46a3e2f3750023c641aa307705dc45"} Nov 25 10:49:19 crc kubenswrapper[4813]: I1125 10:49:19.021992 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-5ffc8f797b-hbwwd" Nov 25 10:49:19 crc kubenswrapper[4813]: E1125 10:49:19.182348 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying layer: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Nov 25 10:49:19 crc kubenswrapper[4813]: E1125 10:49:19.182822 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true 
--v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w5m8t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-864885998-bpbjt_openstack-operators(48ea1018-a88f-4ef0-a82f-7e3b012522ec): ErrImagePull: rpc error: code = Canceled desc = copying layer: context canceled" logger="UnhandledError" Nov 25 10:49:19 crc kubenswrapper[4813]: E1125 10:49:19.184483 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying layer: context canceled\"]" pod="openstack-operators/watcher-operator-controller-manager-864885998-bpbjt" podUID="48ea1018-a88f-4ef0-a82f-7e3b012522ec" Nov 25 10:49:19 crc kubenswrapper[4813]: E1125 10:49:19.896714 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Nov 25 10:49:19 crc kubenswrapper[4813]: E1125 10:49:19.897156 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-67krk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-qd4tx_openstack-operators(2bf03402-32ec-423d-a6af-657bc0cfeb15): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 10:49:19 crc kubenswrapper[4813]: E1125 10:49:19.898728 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qd4tx" podUID="2bf03402-32ec-423d-a6af-657bc0cfeb15" Nov 25 10:49:21 crc kubenswrapper[4813]: E1125 10:49:21.040429 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Nov 25 10:49:21 crc kubenswrapper[4813]: E1125 10:49:21.040620 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6vlhn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-774b86978c-f6dvp_openstack-operators(eaf6f1c0-6585-4eba-8baf-942ed2503735): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 10:49:21 crc kubenswrapper[4813]: 
E1125 10:49:21.041866 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack-operators/heat-operator-controller-manager-774b86978c-f6dvp" podUID="eaf6f1c0-6585-4eba-8baf-942ed2503735" Nov 25 10:49:21 crc kubenswrapper[4813]: E1125 10:49:21.493145 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-5cb74df96-cwrzw" podUID="49b29226-49bf-4d59-9c7f-998d924bdace" Nov 25 10:49:21 crc kubenswrapper[4813]: E1125 10:49:21.558659 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = reading blob sha256:4fa131a1b726b2d6468d461e7d8867a2157d5671f712461d8abd126155fdf9ce: Get \"https://quay.io/v2/openstack-k8s-operators/kube-rbac-proxy/blobs/sha256:4fa131a1b726b2d6468d461e7d8867a2157d5671f712461d8abd126155fdf9ce\": context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Nov 25 10:49:21 crc kubenswrapper[4813]: E1125 10:49:21.558855 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jltmt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-baremetal-operator-controller-manager-544b9bb9-v2clw_openstack-operators(0a946ff2-f2e3-48c2-ae3b-774a4ea85492): ErrImagePull: rpc error: code = Canceled desc = reading blob sha256:4fa131a1b726b2d6468d461e7d8867a2157d5671f712461d8abd126155fdf9ce: Get \"https://quay.io/v2/openstack-k8s-operators/kube-rbac-proxy/blobs/sha256:4fa131a1b726b2d6468d461e7d8867a2157d5671f712461d8abd126155fdf9ce\": context canceled" logger="UnhandledError" Nov 25 10:49:21 crc kubenswrapper[4813]: E1125 10:49:21.560205 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = 
Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = reading blob sha256:4fa131a1b726b2d6468d461e7d8867a2157d5671f712461d8abd126155fdf9ce: Get \\\"https://quay.io/v2/openstack-k8s-operators/kube-rbac-proxy/blobs/sha256:4fa131a1b726b2d6468d461e7d8867a2157d5671f712461d8abd126155fdf9ce\\\": context canceled\"]" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-v2clw" podUID="0a946ff2-f2e3-48c2-ae3b-774a4ea85492" Nov 25 10:49:21 crc kubenswrapper[4813]: E1125 10:49:21.913025 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-qplf9" podUID="5f9254c7-c8dc-4504-bdf5-264c78e03b0c" Nov 25 10:49:21 crc kubenswrapper[4813]: I1125 10:49:21.967139 4813 patch_prober.go:28] interesting pod/machine-config-daemon-knhz8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 10:49:21 crc kubenswrapper[4813]: I1125 10:49:21.967192 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" podUID="8ece7e9c-d49a-4348-98ec-bd6ab589f750" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 10:49:21 crc kubenswrapper[4813]: E1125 10:49:21.980931 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying layer: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Nov 25 10:49:21 crc kubenswrapper[4813]: E1125 10:49:21.981126 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mkdps,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
glance-operator-controller-manager-547cf68667-6v6dd_openstack-operators(71c5bfc5-a289-4942-bc55-819f06787eb6): ErrImagePull: rpc error: code = Canceled desc = copying layer: context canceled" logger="UnhandledError" Nov 25 10:49:21 crc kubenswrapper[4813]: E1125 10:49:21.982834 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying layer: context canceled\"]" pod="openstack-operators/glance-operator-controller-manager-547cf68667-6v6dd" podUID="71c5bfc5-a289-4942-bc55-819f06787eb6" Nov 25 10:49:22 crc kubenswrapper[4813]: I1125 10:49:22.071755 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-858778c9dc-fs9sm" event={"ID":"06c81a1e-0461-4457-85ea-1a4060423eda","Type":"ContainerStarted","Data":"25cb68c2c1ba6f2ddb394aec4e209618af74d070a3510b09204ce153745f13d2"} Nov 25 10:49:22 crc kubenswrapper[4813]: I1125 10:49:22.073204 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-858778c9dc-fs9sm" Nov 25 10:49:22 crc kubenswrapper[4813]: I1125 10:49:22.094019 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-858778c9dc-fs9sm" Nov 25 10:49:22 crc kubenswrapper[4813]: I1125 10:49:22.098245 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-c6kw6" event={"ID":"b69526d6-6616-4536-a228-4cdb57e1881c","Type":"ContainerStarted","Data":"bef49ce6e049d09c22983807f8dc3cb5782592e9d985690c5f975f700705ac2e"} Nov 25 10:49:22 crc kubenswrapper[4813]: I1125 10:49:22.098711 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-c6kw6" Nov 25 10:49:22 crc kubenswrapper[4813]: I1125 10:49:22.110471 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-858778c9dc-fs9sm" podStartSLOduration=2.799078821 podStartE2EDuration="38.110452036s" podCreationTimestamp="2025-11-25 10:48:44 +0000 UTC" firstStartedPulling="2025-11-25 10:48:45.847671362 +0000 UTC m=+1022.977381248" lastFinishedPulling="2025-11-25 10:49:21.159044577 +0000 UTC m=+1058.288754463" observedRunningTime="2025-11-25 10:49:22.108796439 +0000 UTC m=+1059.238506355" watchObservedRunningTime="2025-11-25 10:49:22.110452036 +0000 UTC m=+1059.240161922" Nov 25 10:49:22 crc kubenswrapper[4813]: I1125 10:49:22.110928 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-c6kw6" Nov 25 10:49:22 crc kubenswrapper[4813]: I1125 10:49:22.129560 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-gjs27" event={"ID":"a31ffbb8-0255-45d6-9125-6cccc7b444ba","Type":"ContainerStarted","Data":"1dbf889f056b97b4f214ab47fb9b7e636fe8be06bd13c37c369995870da2984b"} Nov 25 10:49:22 crc kubenswrapper[4813]: I1125 10:49:22.130772 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-gjs27" Nov 25 10:49:22 crc kubenswrapper[4813]: I1125 
10:49:22.159728 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-gjs27" Nov 25 10:49:22 crc kubenswrapper[4813]: I1125 10:49:22.165227 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-qplf9" event={"ID":"5f9254c7-c8dc-4504-bdf5-264c78e03b0c","Type":"ContainerStarted","Data":"0fcc2079ff1fdd7cbe49fdf102337aabafc61041b15697fb4c86647a8510453d"} Nov 25 10:49:22 crc kubenswrapper[4813]: E1125 10:49:22.173012 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:5324a6d2f76fc3041023b0cbd09a733ef2b59f310d390e4d6483d219eb96494f\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-qplf9" podUID="5f9254c7-c8dc-4504-bdf5-264c78e03b0c" Nov 25 10:49:22 crc kubenswrapper[4813]: I1125 10:49:22.207973 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-c6kw6" podStartSLOduration=3.306479665 podStartE2EDuration="38.207950016s" podCreationTimestamp="2025-11-25 10:48:44 +0000 UTC" firstStartedPulling="2025-11-25 10:48:46.329982884 +0000 UTC m=+1023.459692770" lastFinishedPulling="2025-11-25 10:49:21.231453235 +0000 UTC m=+1058.361163121" observedRunningTime="2025-11-25 10:49:22.194430842 +0000 UTC m=+1059.324140738" watchObservedRunningTime="2025-11-25 10:49:22.207950016 +0000 UTC m=+1059.337659902" Nov 25 10:49:22 crc kubenswrapper[4813]: I1125 10:49:22.232868 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-jcjzx" event={"ID":"efca9205-8a59-45ce-8c50-36b0d0389f12","Type":"ContainerStarted","Data":"fc1911c2b27ad29c161d9990b55ad86092bae66add0055d78f27c48805c49cd4"} Nov 25 10:49:22 crc kubenswrapper[4813]: I1125 10:49:22.233554 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-jcjzx" Nov 25 10:49:22 crc kubenswrapper[4813]: I1125 10:49:22.239462 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-jcjzx" Nov 25 10:49:22 crc kubenswrapper[4813]: I1125 10:49:22.299388 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-864885998-bpbjt" event={"ID":"48ea1018-a88f-4ef0-a82f-7e3b012522ec","Type":"ContainerStarted","Data":"14eb11c01b6d36ebc30b5c1849b014d01064dc504a5e87489bc440df8d2dba84"} Nov 25 10:49:22 crc kubenswrapper[4813]: I1125 10:49:22.300642 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-864885998-bpbjt" Nov 25 10:49:22 crc kubenswrapper[4813]: I1125 10:49:22.315390 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-gjs27" podStartSLOduration=3.066195988 podStartE2EDuration="38.315362377s" podCreationTimestamp="2025-11-25 10:48:44 +0000 UTC" firstStartedPulling="2025-11-25 10:48:46.262487176 +0000 UTC m=+1023.392197062" lastFinishedPulling="2025-11-25 10:49:21.511653555 +0000 UTC m=+1058.641363451" observedRunningTime="2025-11-25 10:49:22.298609582 +0000 UTC 
m=+1059.428319488" watchObservedRunningTime="2025-11-25 10:49:22.315362377 +0000 UTC m=+1059.445072263" Nov 25 10:49:22 crc kubenswrapper[4813]: I1125 10:49:22.334665 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-4wff2" event={"ID":"03c63a63-9a46-4bda-941b-8c5ba81a13fe","Type":"ContainerStarted","Data":"b023aa9be5c0e7be8472ad4cfd84d978f554e675dc3478167574920b8814ab44"} Nov 25 10:49:22 crc kubenswrapper[4813]: I1125 10:49:22.336166 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-4wff2" Nov 25 10:49:22 crc kubenswrapper[4813]: I1125 10:49:22.344013 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-4wff2" Nov 25 10:49:22 crc kubenswrapper[4813]: I1125 10:49:22.373947 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-864885998-bpbjt" podStartSLOduration=3.186222349 podStartE2EDuration="38.373926371s" podCreationTimestamp="2025-11-25 10:48:44 +0000 UTC" firstStartedPulling="2025-11-25 10:48:46.266100359 +0000 UTC m=+1023.395810245" lastFinishedPulling="2025-11-25 10:49:21.453804381 +0000 UTC m=+1058.583514267" observedRunningTime="2025-11-25 10:49:22.364136053 +0000 UTC m=+1059.493845949" watchObservedRunningTime="2025-11-25 10:49:22.373926371 +0000 UTC m=+1059.503636257" Nov 25 10:49:22 crc kubenswrapper[4813]: I1125 10:49:22.389069 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-5ldjd" event={"ID":"baf6f7bb-db50-4013-8b77-2b7e4c8101c2","Type":"ContainerStarted","Data":"e203cddab4e9f54fd904adece29ce70ed48daf958a22a332a9d946e2f54de662"} Nov 25 10:49:22 crc kubenswrapper[4813]: I1125 10:49:22.389611 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-5ldjd" Nov 25 10:49:22 crc kubenswrapper[4813]: I1125 10:49:22.412137 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-jcjzx" podStartSLOduration=3.57666342 podStartE2EDuration="38.412115026s" podCreationTimestamp="2025-11-25 10:48:44 +0000 UTC" firstStartedPulling="2025-11-25 10:48:46.262247219 +0000 UTC m=+1023.391957105" lastFinishedPulling="2025-11-25 10:49:21.097698825 +0000 UTC m=+1058.227408711" observedRunningTime="2025-11-25 10:49:22.408487033 +0000 UTC m=+1059.538196929" watchObservedRunningTime="2025-11-25 10:49:22.412115026 +0000 UTC m=+1059.541824912" Nov 25 10:49:22 crc kubenswrapper[4813]: I1125 10:49:22.426106 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-5ldjd" Nov 25 10:49:22 crc kubenswrapper[4813]: E1125 10:49:22.426588 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-6j272" podUID="9374bbb0-b458-4c1c-a327-67bcbea83045" Nov 25 10:49:22 crc kubenswrapper[4813]: I1125 10:49:22.452380 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-4wff2" podStartSLOduration=3.383684858 podStartE2EDuration="39.452361199s" podCreationTimestamp="2025-11-25 10:48:43 +0000 UTC" firstStartedPulling="2025-11-25 10:48:45.199845927 +0000 UTC m=+1022.329555823" lastFinishedPulling="2025-11-25 10:49:21.268522278 +0000 UTC m=+1058.398232164" observedRunningTime="2025-11-25 10:49:22.451188826 +0000 UTC m=+1059.580898732" watchObservedRunningTime="2025-11-25 10:49:22.452361199 +0000 UTC m=+1059.582071095" Nov 25 10:49:22 crc kubenswrapper[4813]: I1125 10:49:22.455059 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-8spkk" event={"ID":"af18e07e-95b3-476f-9604-824c36ae74a5","Type":"ContainerStarted","Data":"eaea2d03640ea3edd8293b2a434ff622f073a2b6b8b30ae43ba04106c22ca184"} Nov 25 10:49:22 crc kubenswrapper[4813]: I1125 10:49:22.457624 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-8spkk" Nov 25 10:49:22 crc kubenswrapper[4813]: I1125 10:49:22.466373 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-8spkk" Nov 25 10:49:22 crc kubenswrapper[4813]: I1125 10:49:22.491457 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-5ldjd" podStartSLOduration=3.663889039 podStartE2EDuration="38.49143964s" podCreationTimestamp="2025-11-25 10:48:44 +0000 UTC" firstStartedPulling="2025-11-25 10:48:46.388011352 +0000 UTC m=+1023.517721238" lastFinishedPulling="2025-11-25 10:49:21.215561953 +0000 UTC m=+1058.345271839" observedRunningTime="2025-11-25 10:49:22.491341567 +0000 UTC m=+1059.621051473" watchObservedRunningTime="2025-11-25 10:49:22.49143964 +0000 UTC m=+1059.621149526" Nov 25 10:49:22 crc kubenswrapper[4813]: I1125 10:49:22.521109 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-fjkzd" event={"ID":"94c3d2b4-f1bb-402d-a39d-78e16bee970b","Type":"ContainerStarted","Data":"0026bbbea81cbbb771cdf9249b0b33abd23829a13a3add52530f09e64d6fa233"} Nov 25 10:49:22 crc kubenswrapper[4813]: I1125 10:49:22.522434 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-fjkzd" Nov 25 10:49:22 crc kubenswrapper[4813]: I1125 10:49:22.540616 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-fjkzd" Nov 25 10:49:22 crc kubenswrapper[4813]: I1125 10:49:22.557988 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cb74df96-cwrzw" event={"ID":"49b29226-49bf-4d59-9c7f-998d924bdace","Type":"ContainerStarted","Data":"5e988c71530c093f9bf2c372726eea5513c2e9e6a3db514a063d74856a91cb71"} Nov 25 10:49:22 crc kubenswrapper[4813]: E1125 10:49:22.562632 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:82207e753574d4be246f86c4b074500d66cf20214aa80f0a8525cf3287a35e6d\\\"\"" pod="openstack-operators/test-operator-controller-manager-5cb74df96-cwrzw" podUID="49b29226-49bf-4d59-9c7f-998d924bdace" Nov 25 
10:49:22 crc kubenswrapper[4813]: I1125 10:49:22.606123 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-8spkk" podStartSLOduration=3.044183123 podStartE2EDuration="38.606099887s" podCreationTimestamp="2025-11-25 10:48:44 +0000 UTC" firstStartedPulling="2025-11-25 10:48:45.844994845 +0000 UTC m=+1022.974704731" lastFinishedPulling="2025-11-25 10:49:21.406911609 +0000 UTC m=+1058.536621495" observedRunningTime="2025-11-25 10:49:22.583203087 +0000 UTC m=+1059.712912993" watchObservedRunningTime="2025-11-25 10:49:22.606099887 +0000 UTC m=+1059.735809773" Nov 25 10:49:22 crc kubenswrapper[4813]: I1125 10:49:22.744425 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-fjkzd" podStartSLOduration=3.849119881 podStartE2EDuration="38.744404776s" podCreationTimestamp="2025-11-25 10:48:44 +0000 UTC" firstStartedPulling="2025-11-25 10:48:46.270835333 +0000 UTC m=+1023.400545219" lastFinishedPulling="2025-11-25 10:49:21.166120228 +0000 UTC m=+1058.295830114" observedRunningTime="2025-11-25 10:49:22.711779149 +0000 UTC m=+1059.841489035" watchObservedRunningTime="2025-11-25 10:49:22.744404776 +0000 UTC m=+1059.874114662" Nov 25 10:49:23 crc kubenswrapper[4813]: I1125 10:49:23.567120 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-774b86978c-f6dvp" event={"ID":"eaf6f1c0-6585-4eba-8baf-942ed2503735","Type":"ContainerStarted","Data":"14d85cf797040f5e876e0b4fb7f60bb8b90a3270883a6c7d69f506951107a454"} Nov 25 10:49:23 crc kubenswrapper[4813]: I1125 10:49:23.567197 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-774b86978c-f6dvp" event={"ID":"eaf6f1c0-6585-4eba-8baf-942ed2503735","Type":"ContainerStarted","Data":"2d86425d39a76afbae7bbc79c5701956f6ea1837959e4367e78b7b49ded3ad6c"} Nov 25 10:49:23 crc kubenswrapper[4813]: I1125 10:49:23.567440 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-774b86978c-f6dvp" Nov 25 10:49:23 crc kubenswrapper[4813]: I1125 10:49:23.569484 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-tc2mg" event={"ID":"db556642-a360-4559-8cde-7c25d7a893e0","Type":"ContainerStarted","Data":"761d717f4748d7b37a2b1ef5dedc100b327a3d1409912fe132211587e13c534c"} Nov 25 10:49:23 crc kubenswrapper[4813]: I1125 10:49:23.569935 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-tc2mg" Nov 25 10:49:23 crc kubenswrapper[4813]: I1125 10:49:23.571242 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-547cf68667-6v6dd" event={"ID":"71c5bfc5-a289-4942-bc55-819f06787eb6","Type":"ContainerStarted","Data":"62d41f37fe6bcb2aaaa8a74acc2d4ef697ba70c05496c4bfd24aa886b24309fb"} Nov 25 10:49:23 crc kubenswrapper[4813]: I1125 10:49:23.571294 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-547cf68667-6v6dd" event={"ID":"71c5bfc5-a289-4942-bc55-819f06787eb6","Type":"ContainerStarted","Data":"97c3da655b8a78107c3d2c8c2e2c1a9f338de1a59c124f53ea5084c05a530049"} Nov 25 10:49:23 crc kubenswrapper[4813]: I1125 10:49:23.571733 4813 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-547cf68667-6v6dd" Nov 25 10:49:23 crc kubenswrapper[4813]: I1125 10:49:23.576703 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-tc2mg" Nov 25 10:49:23 crc kubenswrapper[4813]: I1125 10:49:23.577571 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-dvfd9" event={"ID":"a650bdd3-2541-4b76-b5db-64273262bc06","Type":"ContainerStarted","Data":"043ec2c1ea307987fdc54632852b9ca14629f9f7a99464a13c73175d59bd567a"} Nov 25 10:49:23 crc kubenswrapper[4813]: I1125 10:49:23.579293 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-6j272" event={"ID":"9374bbb0-b458-4c1c-a327-67bcbea83045","Type":"ContainerStarted","Data":"5d4c93550a9405427fdab0a7cce52312f48bde00a0c89c4132a52fcf1e9ac0f9"} Nov 25 10:49:23 crc kubenswrapper[4813]: I1125 10:49:23.581305 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-76j46" event={"ID":"7921584b-8ce0-45b8-8a56-ab0fdde43582","Type":"ContainerStarted","Data":"5825b08b2246dd072e1f18ddfd4324f7e35502885dfaba472ebde237f2205e86"} Nov 25 10:49:23 crc kubenswrapper[4813]: I1125 10:49:23.581483 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-76j46" Nov 25 10:49:23 crc kubenswrapper[4813]: I1125 10:49:23.585203 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-76j46" Nov 25 10:49:23 crc kubenswrapper[4813]: I1125 10:49:23.586159 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-blrjt" event={"ID":"d4a62556-e6e8-42dc-b7e4-180c40611393","Type":"ContainerStarted","Data":"e52a70c2bf2be06fecaa28478f3d9bdbf853071d7be004473028c451bdcb984c"} Nov 25 10:49:23 crc kubenswrapper[4813]: I1125 10:49:23.586395 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-blrjt" Nov 25 10:49:23 crc kubenswrapper[4813]: I1125 10:49:23.587616 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-v2clw" event={"ID":"0a946ff2-f2e3-48c2-ae3b-774a4ea85492","Type":"ContainerStarted","Data":"e3bbdce0fdd885f2702d09e3ce60b294f195cf38c6f9d83a1dc5f311ba72589a"} Nov 25 10:49:23 crc kubenswrapper[4813]: I1125 10:49:23.588257 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-blrjt" Nov 25 10:49:23 crc kubenswrapper[4813]: I1125 10:49:23.589665 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-hjqzd" event={"ID":"aa2934d9-d547-49d0-9d06-232120b44fa1","Type":"ContainerStarted","Data":"a391f511bb40e70ad9382bfef2c9b8c79c8974522f010c84a96f91e4593926ff"} Nov 25 10:49:23 crc kubenswrapper[4813]: I1125 10:49:23.589835 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-hjqzd" Nov 25 10:49:23 crc kubenswrapper[4813]: I1125 
10:49:23.591283 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-hjqzd" Nov 25 10:49:23 crc kubenswrapper[4813]: I1125 10:49:23.591374 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-864885998-bpbjt" event={"ID":"48ea1018-a88f-4ef0-a82f-7e3b012522ec","Type":"ContainerStarted","Data":"5bf8c49ab328982736c2d179c49a6612e103261ba0bf30c51115c321c962cf85"} Nov 25 10:49:23 crc kubenswrapper[4813]: I1125 10:49:23.595493 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-2d2x7" event={"ID":"9093a664-86f3-4349-bd13-0a5e4aca8036","Type":"ContainerStarted","Data":"e71cd9c8649a75aa1002399121e3ef4a80b78699c014981528559ee34f4a2247"} Nov 25 10:49:23 crc kubenswrapper[4813]: I1125 10:49:23.597774 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-2d2x7" Nov 25 10:49:23 crc kubenswrapper[4813]: I1125 10:49:23.605510 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-2d2x7" Nov 25 10:49:23 crc kubenswrapper[4813]: I1125 10:49:23.610406 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-774b86978c-f6dvp" podStartSLOduration=2.237790614 podStartE2EDuration="39.610384468s" podCreationTimestamp="2025-11-25 10:48:44 +0000 UTC" firstStartedPulling="2025-11-25 10:48:45.657407816 +0000 UTC m=+1022.787117702" lastFinishedPulling="2025-11-25 10:49:23.03000167 +0000 UTC m=+1060.159711556" observedRunningTime="2025-11-25 10:49:23.606006274 +0000 UTC m=+1060.735716170" watchObservedRunningTime="2025-11-25 10:49:23.610384468 +0000 UTC m=+1060.740094364" Nov 25 10:49:23 crc kubenswrapper[4813]: I1125 10:49:23.634382 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-76j46" podStartSLOduration=4.058908931 podStartE2EDuration="39.634366919s" podCreationTimestamp="2025-11-25 10:48:44 +0000 UTC" firstStartedPulling="2025-11-25 10:48:46.387543439 +0000 UTC m=+1023.517253325" lastFinishedPulling="2025-11-25 10:49:21.963001427 +0000 UTC m=+1059.092711313" observedRunningTime="2025-11-25 10:49:23.63298425 +0000 UTC m=+1060.762694136" watchObservedRunningTime="2025-11-25 10:49:23.634366919 +0000 UTC m=+1060.764076805" Nov 25 10:49:23 crc kubenswrapper[4813]: I1125 10:49:23.694145 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-hjqzd" podStartSLOduration=3.247463628 podStartE2EDuration="39.694122287s" podCreationTimestamp="2025-11-25 10:48:44 +0000 UTC" firstStartedPulling="2025-11-25 10:48:45.309625816 +0000 UTC m=+1022.439335702" lastFinishedPulling="2025-11-25 10:49:21.756284475 +0000 UTC m=+1058.885994361" observedRunningTime="2025-11-25 10:49:23.660125891 +0000 UTC m=+1060.789835807" watchObservedRunningTime="2025-11-25 10:49:23.694122287 +0000 UTC m=+1060.823832183" Nov 25 10:49:23 crc kubenswrapper[4813]: I1125 10:49:23.730267 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-tc2mg" podStartSLOduration=3.797605577 
podStartE2EDuration="39.730247393s" podCreationTimestamp="2025-11-25 10:48:44 +0000 UTC" firstStartedPulling="2025-11-25 10:48:46.262139886 +0000 UTC m=+1023.391849772" lastFinishedPulling="2025-11-25 10:49:22.194781692 +0000 UTC m=+1059.324491588" observedRunningTime="2025-11-25 10:49:23.725987632 +0000 UTC m=+1060.855697538" watchObservedRunningTime="2025-11-25 10:49:23.730247393 +0000 UTC m=+1060.859957289" Nov 25 10:49:23 crc kubenswrapper[4813]: I1125 10:49:23.751310 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-blrjt" podStartSLOduration=3.562340794 podStartE2EDuration="39.751288831s" podCreationTimestamp="2025-11-25 10:48:44 +0000 UTC" firstStartedPulling="2025-11-25 10:48:46.261971022 +0000 UTC m=+1023.391680908" lastFinishedPulling="2025-11-25 10:49:22.450919059 +0000 UTC m=+1059.580628945" observedRunningTime="2025-11-25 10:49:23.751012523 +0000 UTC m=+1060.880722409" watchObservedRunningTime="2025-11-25 10:49:23.751288831 +0000 UTC m=+1060.880998727" Nov 25 10:49:23 crc kubenswrapper[4813]: I1125 10:49:23.781868 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-547cf68667-6v6dd" podStartSLOduration=2.208128381 podStartE2EDuration="39.781844319s" podCreationTimestamp="2025-11-25 10:48:44 +0000 UTC" firstStartedPulling="2025-11-25 10:48:45.178092019 +0000 UTC m=+1022.307801905" lastFinishedPulling="2025-11-25 10:49:22.751807957 +0000 UTC m=+1059.881517843" observedRunningTime="2025-11-25 10:49:23.77763765 +0000 UTC m=+1060.907347566" watchObservedRunningTime="2025-11-25 10:49:23.781844319 +0000 UTC m=+1060.911554205" Nov 25 10:49:23 crc kubenswrapper[4813]: I1125 10:49:23.811622 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-2d2x7" podStartSLOduration=4.012837413 podStartE2EDuration="39.811605205s" podCreationTimestamp="2025-11-25 10:48:44 +0000 UTC" firstStartedPulling="2025-11-25 10:48:46.25523069 +0000 UTC m=+1023.384940576" lastFinishedPulling="2025-11-25 10:49:22.053998492 +0000 UTC m=+1059.183708368" observedRunningTime="2025-11-25 10:49:23.809500905 +0000 UTC m=+1060.939210821" watchObservedRunningTime="2025-11-25 10:49:23.811605205 +0000 UTC m=+1060.941315091" Nov 25 10:49:23 crc kubenswrapper[4813]: I1125 10:49:23.868786 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-dvfd9" podStartSLOduration=3.400066993 podStartE2EDuration="39.868764448s" podCreationTimestamp="2025-11-25 10:48:44 +0000 UTC" firstStartedPulling="2025-11-25 10:48:45.284668537 +0000 UTC m=+1022.414378433" lastFinishedPulling="2025-11-25 10:49:21.753366002 +0000 UTC m=+1058.883075888" observedRunningTime="2025-11-25 10:49:23.857018975 +0000 UTC m=+1060.986728871" watchObservedRunningTime="2025-11-25 10:49:23.868764448 +0000 UTC m=+1060.998474334" Nov 25 10:49:24 crc kubenswrapper[4813]: I1125 10:49:24.607819 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-v2clw" event={"ID":"0a946ff2-f2e3-48c2-ae3b-774a4ea85492","Type":"ContainerStarted","Data":"4329d37a239aebb395e6202ba5bfa696cff2d67c5fb182503126cf6e5102191b"} Nov 25 10:49:24 crc kubenswrapper[4813]: I1125 10:49:24.608220 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-v2clw" Nov 25 10:49:24 crc kubenswrapper[4813]: I1125 10:49:24.610117 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-6j272" event={"ID":"9374bbb0-b458-4c1c-a327-67bcbea83045","Type":"ContainerStarted","Data":"e1befc29d4e04337c0bac8394622429b45672d1ff678eecd535a149b5a3d829d"} Nov 25 10:49:24 crc kubenswrapper[4813]: I1125 10:49:24.611277 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-6j272" Nov 25 10:49:24 crc kubenswrapper[4813]: I1125 10:49:24.637129 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-v2clw" podStartSLOduration=4.763049874 podStartE2EDuration="40.637108655s" podCreationTimestamp="2025-11-25 10:48:44 +0000 UTC" firstStartedPulling="2025-11-25 10:48:47.235079417 +0000 UTC m=+1024.364789303" lastFinishedPulling="2025-11-25 10:49:23.109138198 +0000 UTC m=+1060.238848084" observedRunningTime="2025-11-25 10:49:24.634234184 +0000 UTC m=+1061.763944080" watchObservedRunningTime="2025-11-25 10:49:24.637108655 +0000 UTC m=+1061.766818541" Nov 25 10:49:24 crc kubenswrapper[4813]: I1125 10:49:24.656481 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-6j272" podStartSLOduration=2.47557394 podStartE2EDuration="40.656461275s" podCreationTimestamp="2025-11-25 10:48:44 +0000 UTC" firstStartedPulling="2025-11-25 10:48:46.183738469 +0000 UTC m=+1023.313448355" lastFinishedPulling="2025-11-25 10:49:24.364625804 +0000 UTC m=+1061.494335690" observedRunningTime="2025-11-25 10:49:24.651531155 +0000 UTC m=+1061.781241081" watchObservedRunningTime="2025-11-25 10:49:24.656461275 +0000 UTC m=+1061.786171191" Nov 25 10:49:31 crc kubenswrapper[4813]: E1125 10:49:31.623888 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qd4tx" podUID="2bf03402-32ec-423d-a6af-657bc0cfeb15" Nov 25 10:49:34 crc kubenswrapper[4813]: I1125 10:49:34.417790 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-547cf68667-6v6dd" Nov 25 10:49:34 crc kubenswrapper[4813]: I1125 10:49:34.462872 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-774b86978c-f6dvp" Nov 25 10:49:34 crc kubenswrapper[4813]: E1125 10:49:34.623090 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:5324a6d2f76fc3041023b0cbd09a733ef2b59f310d390e4d6483d219eb96494f\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-qplf9" podUID="5f9254c7-c8dc-4504-bdf5-264c78e03b0c" Nov 25 10:49:34 crc kubenswrapper[4813]: I1125 10:49:34.836869 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/nova-operator-controller-manager-79556f57fc-6j272" Nov 25 10:49:35 crc kubenswrapper[4813]: I1125 10:49:35.228144 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-864885998-bpbjt" Nov 25 10:49:36 crc kubenswrapper[4813]: I1125 10:49:36.467282 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-v2clw" Nov 25 10:49:39 crc kubenswrapper[4813]: I1125 10:49:39.037363 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Nov 25 10:49:39 crc kubenswrapper[4813]: I1125 10:49:39.038607 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 10:49:39 crc kubenswrapper[4813]: I1125 10:49:39.043381 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Nov 25 10:49:39 crc kubenswrapper[4813]: I1125 10:49:39.043591 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Nov 25 10:49:39 crc kubenswrapper[4813]: I1125 10:49:39.058977 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Nov 25 10:49:39 crc kubenswrapper[4813]: I1125 10:49:39.138804 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a1e1d26e-c928-4d2f-8408-b4617fa42528-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a1e1d26e-c928-4d2f-8408-b4617fa42528\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 10:49:39 crc kubenswrapper[4813]: I1125 10:49:39.139227 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a1e1d26e-c928-4d2f-8408-b4617fa42528-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a1e1d26e-c928-4d2f-8408-b4617fa42528\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 10:49:39 crc kubenswrapper[4813]: I1125 10:49:39.240974 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a1e1d26e-c928-4d2f-8408-b4617fa42528-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a1e1d26e-c928-4d2f-8408-b4617fa42528\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 10:49:39 crc kubenswrapper[4813]: I1125 10:49:39.241073 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a1e1d26e-c928-4d2f-8408-b4617fa42528-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a1e1d26e-c928-4d2f-8408-b4617fa42528\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 10:49:39 crc kubenswrapper[4813]: I1125 10:49:39.241191 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a1e1d26e-c928-4d2f-8408-b4617fa42528-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a1e1d26e-c928-4d2f-8408-b4617fa42528\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 10:49:39 crc kubenswrapper[4813]: I1125 10:49:39.264254 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/a1e1d26e-c928-4d2f-8408-b4617fa42528-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a1e1d26e-c928-4d2f-8408-b4617fa42528\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 10:49:39 crc kubenswrapper[4813]: I1125 10:49:39.372777 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 10:49:39 crc kubenswrapper[4813]: I1125 10:49:39.714111 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cb74df96-cwrzw" event={"ID":"49b29226-49bf-4d59-9c7f-998d924bdace","Type":"ContainerStarted","Data":"28398be9153460c6be147a14357556d4ffc02fadeb49a7434f9278106ec30e34"} Nov 25 10:49:39 crc kubenswrapper[4813]: I1125 10:49:39.714662 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5cb74df96-cwrzw" Nov 25 10:49:39 crc kubenswrapper[4813]: I1125 10:49:39.736257 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5cb74df96-cwrzw" podStartSLOduration=2.972389463 podStartE2EDuration="55.736231697s" podCreationTimestamp="2025-11-25 10:48:44 +0000 UTC" firstStartedPulling="2025-11-25 10:48:46.284986985 +0000 UTC m=+1023.414696871" lastFinishedPulling="2025-11-25 10:49:39.048829219 +0000 UTC m=+1076.178539105" observedRunningTime="2025-11-25 10:49:39.731883074 +0000 UTC m=+1076.861592980" watchObservedRunningTime="2025-11-25 10:49:39.736231697 +0000 UTC m=+1076.865941583" Nov 25 10:49:39 crc kubenswrapper[4813]: I1125 10:49:39.830530 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Nov 25 10:49:39 crc kubenswrapper[4813]: W1125 10:49:39.835886 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-poda1e1d26e_c928_4d2f_8408_b4617fa42528.slice/crio-e8f78c0db20196c11238833b74016698a73f1af85a6a65415e772f6faa14e805 WatchSource:0}: Error finding container e8f78c0db20196c11238833b74016698a73f1af85a6a65415e772f6faa14e805: Status 404 returned error can't find the container with id e8f78c0db20196c11238833b74016698a73f1af85a6a65415e772f6faa14e805 Nov 25 10:49:40 crc kubenswrapper[4813]: I1125 10:49:40.723519 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"a1e1d26e-c928-4d2f-8408-b4617fa42528","Type":"ContainerStarted","Data":"b4111db53ffb8d8d3f490a269d9eba4c229592df7031160a7046d400ef740f79"} Nov 25 10:49:40 crc kubenswrapper[4813]: I1125 10:49:40.723933 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"a1e1d26e-c928-4d2f-8408-b4617fa42528","Type":"ContainerStarted","Data":"e8f78c0db20196c11238833b74016698a73f1af85a6a65415e772f6faa14e805"} Nov 25 10:49:40 crc kubenswrapper[4813]: I1125 10:49:40.744092 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=1.744073909 podStartE2EDuration="1.744073909s" podCreationTimestamp="2025-11-25 10:49:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 10:49:40.738326676 +0000 UTC m=+1077.868036572" watchObservedRunningTime="2025-11-25 10:49:40.744073909 +0000 UTC m=+1077.873783795" Nov 25 10:49:41 crc 
kubenswrapper[4813]: I1125 10:49:41.736453 4813 generic.go:334] "Generic (PLEG): container finished" podID="a1e1d26e-c928-4d2f-8408-b4617fa42528" containerID="b4111db53ffb8d8d3f490a269d9eba4c229592df7031160a7046d400ef740f79" exitCode=0 Nov 25 10:49:41 crc kubenswrapper[4813]: I1125 10:49:41.736501 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"a1e1d26e-c928-4d2f-8408-b4617fa42528","Type":"ContainerDied","Data":"b4111db53ffb8d8d3f490a269d9eba4c229592df7031160a7046d400ef740f79"} Nov 25 10:49:43 crc kubenswrapper[4813]: I1125 10:49:43.010042 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 10:49:43 crc kubenswrapper[4813]: I1125 10:49:43.202213 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a1e1d26e-c928-4d2f-8408-b4617fa42528-kube-api-access\") pod \"a1e1d26e-c928-4d2f-8408-b4617fa42528\" (UID: \"a1e1d26e-c928-4d2f-8408-b4617fa42528\") " Nov 25 10:49:43 crc kubenswrapper[4813]: I1125 10:49:43.202268 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a1e1d26e-c928-4d2f-8408-b4617fa42528-kubelet-dir\") pod \"a1e1d26e-c928-4d2f-8408-b4617fa42528\" (UID: \"a1e1d26e-c928-4d2f-8408-b4617fa42528\") " Nov 25 10:49:43 crc kubenswrapper[4813]: I1125 10:49:43.202441 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1e1d26e-c928-4d2f-8408-b4617fa42528-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a1e1d26e-c928-4d2f-8408-b4617fa42528" (UID: "a1e1d26e-c928-4d2f-8408-b4617fa42528"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 10:49:43 crc kubenswrapper[4813]: I1125 10:49:43.202773 4813 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a1e1d26e-c928-4d2f-8408-b4617fa42528-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 25 10:49:43 crc kubenswrapper[4813]: I1125 10:49:43.208360 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1e1d26e-c928-4d2f-8408-b4617fa42528-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a1e1d26e-c928-4d2f-8408-b4617fa42528" (UID: "a1e1d26e-c928-4d2f-8408-b4617fa42528"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:49:43 crc kubenswrapper[4813]: I1125 10:49:43.304876 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a1e1d26e-c928-4d2f-8408-b4617fa42528-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 25 10:49:43 crc kubenswrapper[4813]: I1125 10:49:43.766944 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qd4tx" event={"ID":"2bf03402-32ec-423d-a6af-657bc0cfeb15","Type":"ContainerStarted","Data":"f5c803b338997a2127e546a385ad4b1241de953818b9e737d25f6a6ed6ccb80d"} Nov 25 10:49:43 crc kubenswrapper[4813]: I1125 10:49:43.771654 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"a1e1d26e-c928-4d2f-8408-b4617fa42528","Type":"ContainerDied","Data":"e8f78c0db20196c11238833b74016698a73f1af85a6a65415e772f6faa14e805"} Nov 25 10:49:43 crc kubenswrapper[4813]: I1125 10:49:43.771731 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8f78c0db20196c11238833b74016698a73f1af85a6a65415e772f6faa14e805" Nov 25 10:49:43 crc kubenswrapper[4813]: I1125 10:49:43.771835 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 10:49:43 crc kubenswrapper[4813]: I1125 10:49:43.793975 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qd4tx" podStartSLOduration=2.8225267670000003 podStartE2EDuration="59.793952965s" podCreationTimestamp="2025-11-25 10:48:44 +0000 UTC" firstStartedPulling="2025-11-25 10:48:46.28232319 +0000 UTC m=+1023.412033076" lastFinishedPulling="2025-11-25 10:49:43.253749388 +0000 UTC m=+1080.383459274" observedRunningTime="2025-11-25 10:49:43.787426569 +0000 UTC m=+1080.917136495" watchObservedRunningTime="2025-11-25 10:49:43.793952965 +0000 UTC m=+1080.923662851" Nov 25 10:49:45 crc kubenswrapper[4813]: I1125 10:49:45.167907 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5cb74df96-cwrzw" Nov 25 10:49:45 crc kubenswrapper[4813]: I1125 10:49:45.641713 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Nov 25 10:49:45 crc kubenswrapper[4813]: E1125 10:49:45.642302 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1e1d26e-c928-4d2f-8408-b4617fa42528" containerName="pruner" Nov 25 10:49:45 crc kubenswrapper[4813]: I1125 10:49:45.642324 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1e1d26e-c928-4d2f-8408-b4617fa42528" containerName="pruner" Nov 25 10:49:45 crc kubenswrapper[4813]: I1125 10:49:45.642501 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1e1d26e-c928-4d2f-8408-b4617fa42528" containerName="pruner" Nov 25 10:49:45 crc kubenswrapper[4813]: I1125 10:49:45.643047 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 25 10:49:45 crc kubenswrapper[4813]: I1125 10:49:45.646754 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Nov 25 10:49:45 crc kubenswrapper[4813]: I1125 10:49:45.646829 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Nov 25 10:49:45 crc kubenswrapper[4813]: I1125 10:49:45.654087 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Nov 25 10:49:45 crc kubenswrapper[4813]: I1125 10:49:45.744083 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2c3ebcfb-71d9-4d57-824a-b6468b15791e-var-lock\") pod \"installer-9-crc\" (UID: \"2c3ebcfb-71d9-4d57-824a-b6468b15791e\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 25 10:49:45 crc kubenswrapper[4813]: I1125 10:49:45.744120 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2c3ebcfb-71d9-4d57-824a-b6468b15791e-kubelet-dir\") pod \"installer-9-crc\" (UID: \"2c3ebcfb-71d9-4d57-824a-b6468b15791e\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 25 10:49:45 crc kubenswrapper[4813]: I1125 10:49:45.744197 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2c3ebcfb-71d9-4d57-824a-b6468b15791e-kube-api-access\") pod \"installer-9-crc\" (UID: \"2c3ebcfb-71d9-4d57-824a-b6468b15791e\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 25 10:49:45 crc kubenswrapper[4813]: I1125 10:49:45.847639 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2c3ebcfb-71d9-4d57-824a-b6468b15791e-kube-api-access\") pod \"installer-9-crc\" (UID: \"2c3ebcfb-71d9-4d57-824a-b6468b15791e\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 25 10:49:45 crc kubenswrapper[4813]: I1125 10:49:45.847841 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2c3ebcfb-71d9-4d57-824a-b6468b15791e-var-lock\") pod \"installer-9-crc\" (UID: \"2c3ebcfb-71d9-4d57-824a-b6468b15791e\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 25 10:49:45 crc kubenswrapper[4813]: I1125 10:49:45.847876 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2c3ebcfb-71d9-4d57-824a-b6468b15791e-kubelet-dir\") pod \"installer-9-crc\" (UID: \"2c3ebcfb-71d9-4d57-824a-b6468b15791e\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 25 10:49:45 crc kubenswrapper[4813]: I1125 10:49:45.848007 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2c3ebcfb-71d9-4d57-824a-b6468b15791e-kubelet-dir\") pod \"installer-9-crc\" (UID: \"2c3ebcfb-71d9-4d57-824a-b6468b15791e\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 25 10:49:45 crc kubenswrapper[4813]: I1125 10:49:45.848061 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2c3ebcfb-71d9-4d57-824a-b6468b15791e-var-lock\") pod \"installer-9-crc\" (UID: 
\"2c3ebcfb-71d9-4d57-824a-b6468b15791e\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 25 10:49:45 crc kubenswrapper[4813]: I1125 10:49:45.868578 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2c3ebcfb-71d9-4d57-824a-b6468b15791e-kube-api-access\") pod \"installer-9-crc\" (UID: \"2c3ebcfb-71d9-4d57-824a-b6468b15791e\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 25 10:49:46 crc kubenswrapper[4813]: I1125 10:49:46.005454 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 25 10:49:46 crc kubenswrapper[4813]: I1125 10:49:46.437520 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Nov 25 10:49:46 crc kubenswrapper[4813]: I1125 10:49:46.793448 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"2c3ebcfb-71d9-4d57-824a-b6468b15791e","Type":"ContainerStarted","Data":"c6384e09d9250afe7588a52052612bb78f193e4cfbf325d504522d3f5ec80a63"} Nov 25 10:49:46 crc kubenswrapper[4813]: I1125 10:49:46.795672 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-qplf9" event={"ID":"5f9254c7-c8dc-4504-bdf5-264c78e03b0c","Type":"ContainerStarted","Data":"e5cae6caa5898754bc6fa96cd83b7f4f38ebb445710e2457a04bde21b6f3350e"} Nov 25 10:49:46 crc kubenswrapper[4813]: I1125 10:49:46.795893 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-qplf9" Nov 25 10:49:46 crc kubenswrapper[4813]: I1125 10:49:46.815911 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-qplf9" podStartSLOduration=3.115182821 podStartE2EDuration="1m2.815892045s" podCreationTimestamp="2025-11-25 10:48:44 +0000 UTC" firstStartedPulling="2025-11-25 10:48:46.466590585 +0000 UTC m=+1023.596300471" lastFinishedPulling="2025-11-25 10:49:46.167299809 +0000 UTC m=+1083.297009695" observedRunningTime="2025-11-25 10:49:46.809123203 +0000 UTC m=+1083.938833109" watchObservedRunningTime="2025-11-25 10:49:46.815892045 +0000 UTC m=+1083.945601931" Nov 25 10:49:47 crc kubenswrapper[4813]: I1125 10:49:47.805163 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"2c3ebcfb-71d9-4d57-824a-b6468b15791e","Type":"ContainerStarted","Data":"e997765f737a2fde8118b784a45edcef8e97712647cf86833d19264a8150d1c3"} Nov 25 10:49:47 crc kubenswrapper[4813]: I1125 10:49:47.830231 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.830211131 podStartE2EDuration="2.830211131s" podCreationTimestamp="2025-11-25 10:49:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 10:49:47.829304616 +0000 UTC m=+1084.959014502" watchObservedRunningTime="2025-11-25 10:49:47.830211131 +0000 UTC m=+1084.959921027" Nov 25 10:49:51 crc kubenswrapper[4813]: I1125 10:49:51.967184 4813 patch_prober.go:28] interesting pod/machine-config-daemon-knhz8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 10:49:51 crc kubenswrapper[4813]: I1125 10:49:51.967515 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" podUID="8ece7e9c-d49a-4348-98ec-bd6ab589f750" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 10:49:51 crc kubenswrapper[4813]: I1125 10:49:51.967559 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" Nov 25 10:49:51 crc kubenswrapper[4813]: I1125 10:49:51.968339 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"efbe54cb2ef6c89c7fb03c162ec904d1deff9a1b48f07c1332fb33b84a4f4c6c"} pod="openshift-machine-config-operator/machine-config-daemon-knhz8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 10:49:51 crc kubenswrapper[4813]: I1125 10:49:51.968412 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" podUID="8ece7e9c-d49a-4348-98ec-bd6ab589f750" containerName="machine-config-daemon" containerID="cri-o://efbe54cb2ef6c89c7fb03c162ec904d1deff9a1b48f07c1332fb33b84a4f4c6c" gracePeriod=600 Nov 25 10:49:52 crc kubenswrapper[4813]: I1125 10:49:52.837566 4813 generic.go:334] "Generic (PLEG): container finished" podID="8ece7e9c-d49a-4348-98ec-bd6ab589f750" containerID="efbe54cb2ef6c89c7fb03c162ec904d1deff9a1b48f07c1332fb33b84a4f4c6c" exitCode=0 Nov 25 10:49:52 crc kubenswrapper[4813]: I1125 10:49:52.837627 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" event={"ID":"8ece7e9c-d49a-4348-98ec-bd6ab589f750","Type":"ContainerDied","Data":"efbe54cb2ef6c89c7fb03c162ec904d1deff9a1b48f07c1332fb33b84a4f4c6c"} Nov 25 10:49:52 crc kubenswrapper[4813]: I1125 10:49:52.837966 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" event={"ID":"8ece7e9c-d49a-4348-98ec-bd6ab589f750","Type":"ContainerStarted","Data":"aa994bf2afc77b306a9a9dd90fad6893b4b3c7e60546773c6c8bfb41dfb47486"} Nov 25 10:49:52 crc kubenswrapper[4813]: I1125 10:49:52.837986 4813 scope.go:117] "RemoveContainer" containerID="94199ba3a0acbc10bf1b1d8a9e55614a98ff3a435215d0c63b967639b76f1985" Nov 25 10:49:55 crc kubenswrapper[4813]: I1125 10:49:55.062348 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-qplf9" Nov 25 10:50:09 crc kubenswrapper[4813]: I1125 10:50:09.681370 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-wn82k"] Nov 25 10:50:09 crc kubenswrapper[4813]: I1125 10:50:09.683301 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-wn82k" Nov 25 10:50:09 crc kubenswrapper[4813]: I1125 10:50:09.686355 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Nov 25 10:50:09 crc kubenswrapper[4813]: I1125 10:50:09.686451 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Nov 25 10:50:09 crc kubenswrapper[4813]: I1125 10:50:09.686468 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Nov 25 10:50:09 crc kubenswrapper[4813]: I1125 10:50:09.686418 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-9d9wl" Nov 25 10:50:09 crc kubenswrapper[4813]: I1125 10:50:09.695016 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-wn82k"] Nov 25 10:50:09 crc kubenswrapper[4813]: I1125 10:50:09.743363 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-s5ghf"] Nov 25 10:50:09 crc kubenswrapper[4813]: I1125 10:50:09.744651 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-s5ghf" Nov 25 10:50:09 crc kubenswrapper[4813]: I1125 10:50:09.747135 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Nov 25 10:50:09 crc kubenswrapper[4813]: I1125 10:50:09.757576 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-s5ghf"] Nov 25 10:50:09 crc kubenswrapper[4813]: I1125 10:50:09.794262 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69f8a703-848a-4de9-a102-81426dcd6c3a-config\") pod \"dnsmasq-dns-675f4bcbfc-wn82k\" (UID: \"69f8a703-848a-4de9-a102-81426dcd6c3a\") " pod="openstack/dnsmasq-dns-675f4bcbfc-wn82k" Nov 25 10:50:09 crc kubenswrapper[4813]: I1125 10:50:09.794386 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxcrh\" (UniqueName: \"kubernetes.io/projected/69f8a703-848a-4de9-a102-81426dcd6c3a-kube-api-access-zxcrh\") pod \"dnsmasq-dns-675f4bcbfc-wn82k\" (UID: \"69f8a703-848a-4de9-a102-81426dcd6c3a\") " pod="openstack/dnsmasq-dns-675f4bcbfc-wn82k" Nov 25 10:50:09 crc kubenswrapper[4813]: I1125 10:50:09.896448 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxcrh\" (UniqueName: \"kubernetes.io/projected/69f8a703-848a-4de9-a102-81426dcd6c3a-kube-api-access-zxcrh\") pod \"dnsmasq-dns-675f4bcbfc-wn82k\" (UID: \"69f8a703-848a-4de9-a102-81426dcd6c3a\") " pod="openstack/dnsmasq-dns-675f4bcbfc-wn82k" Nov 25 10:50:09 crc kubenswrapper[4813]: I1125 10:50:09.896586 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62da3927-ddca-4922-8e9b-c96d06c44c31-config\") pod \"dnsmasq-dns-78dd6ddcc-s5ghf\" (UID: \"62da3927-ddca-4922-8e9b-c96d06c44c31\") " pod="openstack/dnsmasq-dns-78dd6ddcc-s5ghf" Nov 25 10:50:09 crc kubenswrapper[4813]: I1125 10:50:09.896616 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkv86\" (UniqueName: \"kubernetes.io/projected/62da3927-ddca-4922-8e9b-c96d06c44c31-kube-api-access-pkv86\") pod \"dnsmasq-dns-78dd6ddcc-s5ghf\" (UID: \"62da3927-ddca-4922-8e9b-c96d06c44c31\") " 
pod="openstack/dnsmasq-dns-78dd6ddcc-s5ghf" Nov 25 10:50:09 crc kubenswrapper[4813]: I1125 10:50:09.896653 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69f8a703-848a-4de9-a102-81426dcd6c3a-config\") pod \"dnsmasq-dns-675f4bcbfc-wn82k\" (UID: \"69f8a703-848a-4de9-a102-81426dcd6c3a\") " pod="openstack/dnsmasq-dns-675f4bcbfc-wn82k" Nov 25 10:50:09 crc kubenswrapper[4813]: I1125 10:50:09.896782 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/62da3927-ddca-4922-8e9b-c96d06c44c31-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-s5ghf\" (UID: \"62da3927-ddca-4922-8e9b-c96d06c44c31\") " pod="openstack/dnsmasq-dns-78dd6ddcc-s5ghf" Nov 25 10:50:09 crc kubenswrapper[4813]: I1125 10:50:09.897628 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69f8a703-848a-4de9-a102-81426dcd6c3a-config\") pod \"dnsmasq-dns-675f4bcbfc-wn82k\" (UID: \"69f8a703-848a-4de9-a102-81426dcd6c3a\") " pod="openstack/dnsmasq-dns-675f4bcbfc-wn82k" Nov 25 10:50:09 crc kubenswrapper[4813]: I1125 10:50:09.921111 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxcrh\" (UniqueName: \"kubernetes.io/projected/69f8a703-848a-4de9-a102-81426dcd6c3a-kube-api-access-zxcrh\") pod \"dnsmasq-dns-675f4bcbfc-wn82k\" (UID: \"69f8a703-848a-4de9-a102-81426dcd6c3a\") " pod="openstack/dnsmasq-dns-675f4bcbfc-wn82k" Nov 25 10:50:09 crc kubenswrapper[4813]: I1125 10:50:09.998754 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62da3927-ddca-4922-8e9b-c96d06c44c31-config\") pod \"dnsmasq-dns-78dd6ddcc-s5ghf\" (UID: \"62da3927-ddca-4922-8e9b-c96d06c44c31\") " pod="openstack/dnsmasq-dns-78dd6ddcc-s5ghf" Nov 25 10:50:09 crc kubenswrapper[4813]: I1125 10:50:09.998857 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkv86\" (UniqueName: \"kubernetes.io/projected/62da3927-ddca-4922-8e9b-c96d06c44c31-kube-api-access-pkv86\") pod \"dnsmasq-dns-78dd6ddcc-s5ghf\" (UID: \"62da3927-ddca-4922-8e9b-c96d06c44c31\") " pod="openstack/dnsmasq-dns-78dd6ddcc-s5ghf" Nov 25 10:50:09 crc kubenswrapper[4813]: I1125 10:50:09.998899 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/62da3927-ddca-4922-8e9b-c96d06c44c31-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-s5ghf\" (UID: \"62da3927-ddca-4922-8e9b-c96d06c44c31\") " pod="openstack/dnsmasq-dns-78dd6ddcc-s5ghf" Nov 25 10:50:09 crc kubenswrapper[4813]: I1125 10:50:09.999952 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/62da3927-ddca-4922-8e9b-c96d06c44c31-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-s5ghf\" (UID: \"62da3927-ddca-4922-8e9b-c96d06c44c31\") " pod="openstack/dnsmasq-dns-78dd6ddcc-s5ghf" Nov 25 10:50:09 crc kubenswrapper[4813]: I1125 10:50:09.999961 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62da3927-ddca-4922-8e9b-c96d06c44c31-config\") pod \"dnsmasq-dns-78dd6ddcc-s5ghf\" (UID: \"62da3927-ddca-4922-8e9b-c96d06c44c31\") " pod="openstack/dnsmasq-dns-78dd6ddcc-s5ghf" Nov 25 10:50:10 crc kubenswrapper[4813]: I1125 10:50:10.003334 4813 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-wn82k" Nov 25 10:50:10 crc kubenswrapper[4813]: I1125 10:50:10.017970 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkv86\" (UniqueName: \"kubernetes.io/projected/62da3927-ddca-4922-8e9b-c96d06c44c31-kube-api-access-pkv86\") pod \"dnsmasq-dns-78dd6ddcc-s5ghf\" (UID: \"62da3927-ddca-4922-8e9b-c96d06c44c31\") " pod="openstack/dnsmasq-dns-78dd6ddcc-s5ghf" Nov 25 10:50:10 crc kubenswrapper[4813]: I1125 10:50:10.063202 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-s5ghf" Nov 25 10:50:10 crc kubenswrapper[4813]: I1125 10:50:10.459055 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-wn82k"] Nov 25 10:50:10 crc kubenswrapper[4813]: I1125 10:50:10.542410 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-s5ghf"] Nov 25 10:50:10 crc kubenswrapper[4813]: W1125 10:50:10.545874 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod62da3927_ddca_4922_8e9b_c96d06c44c31.slice/crio-e5fc344c62bb3f5976f21dd1d9fb10e48119f7fe6e6763599c2c5b6bf1ea4320 WatchSource:0}: Error finding container e5fc344c62bb3f5976f21dd1d9fb10e48119f7fe6e6763599c2c5b6bf1ea4320: Status 404 returned error can't find the container with id e5fc344c62bb3f5976f21dd1d9fb10e48119f7fe6e6763599c2c5b6bf1ea4320 Nov 25 10:50:10 crc kubenswrapper[4813]: I1125 10:50:10.985118 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-wn82k" event={"ID":"69f8a703-848a-4de9-a102-81426dcd6c3a","Type":"ContainerStarted","Data":"875c1ecba0feb11aac2fe4afeaee39d32b05e1599f00f3a508ab5f7b98b30e41"} Nov 25 10:50:10 crc kubenswrapper[4813]: I1125 10:50:10.986551 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-s5ghf" event={"ID":"62da3927-ddca-4922-8e9b-c96d06c44c31","Type":"ContainerStarted","Data":"e5fc344c62bb3f5976f21dd1d9fb10e48119f7fe6e6763599c2c5b6bf1ea4320"} Nov 25 10:50:12 crc kubenswrapper[4813]: I1125 10:50:12.793741 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-wn82k"] Nov 25 10:50:12 crc kubenswrapper[4813]: I1125 10:50:12.828107 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-hdpgf"] Nov 25 10:50:12 crc kubenswrapper[4813]: I1125 10:50:12.829379 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-hdpgf" Nov 25 10:50:12 crc kubenswrapper[4813]: I1125 10:50:12.845584 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-hdpgf"] Nov 25 10:50:12 crc kubenswrapper[4813]: I1125 10:50:12.944928 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7467\" (UniqueName: \"kubernetes.io/projected/78498723-5c73-4aa4-8480-ef20ce8593ac-kube-api-access-q7467\") pod \"dnsmasq-dns-666b6646f7-hdpgf\" (UID: \"78498723-5c73-4aa4-8480-ef20ce8593ac\") " pod="openstack/dnsmasq-dns-666b6646f7-hdpgf" Nov 25 10:50:12 crc kubenswrapper[4813]: I1125 10:50:12.945019 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78498723-5c73-4aa4-8480-ef20ce8593ac-config\") pod \"dnsmasq-dns-666b6646f7-hdpgf\" (UID: \"78498723-5c73-4aa4-8480-ef20ce8593ac\") " pod="openstack/dnsmasq-dns-666b6646f7-hdpgf" Nov 25 10:50:12 crc kubenswrapper[4813]: I1125 10:50:12.945068 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/78498723-5c73-4aa4-8480-ef20ce8593ac-dns-svc\") pod \"dnsmasq-dns-666b6646f7-hdpgf\" (UID: \"78498723-5c73-4aa4-8480-ef20ce8593ac\") " pod="openstack/dnsmasq-dns-666b6646f7-hdpgf" Nov 25 10:50:13 crc kubenswrapper[4813]: I1125 10:50:13.046326 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7467\" (UniqueName: \"kubernetes.io/projected/78498723-5c73-4aa4-8480-ef20ce8593ac-kube-api-access-q7467\") pod \"dnsmasq-dns-666b6646f7-hdpgf\" (UID: \"78498723-5c73-4aa4-8480-ef20ce8593ac\") " pod="openstack/dnsmasq-dns-666b6646f7-hdpgf" Nov 25 10:50:13 crc kubenswrapper[4813]: I1125 10:50:13.046405 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78498723-5c73-4aa4-8480-ef20ce8593ac-config\") pod \"dnsmasq-dns-666b6646f7-hdpgf\" (UID: \"78498723-5c73-4aa4-8480-ef20ce8593ac\") " pod="openstack/dnsmasq-dns-666b6646f7-hdpgf" Nov 25 10:50:13 crc kubenswrapper[4813]: I1125 10:50:13.046436 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/78498723-5c73-4aa4-8480-ef20ce8593ac-dns-svc\") pod \"dnsmasq-dns-666b6646f7-hdpgf\" (UID: \"78498723-5c73-4aa4-8480-ef20ce8593ac\") " pod="openstack/dnsmasq-dns-666b6646f7-hdpgf" Nov 25 10:50:13 crc kubenswrapper[4813]: I1125 10:50:13.047505 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/78498723-5c73-4aa4-8480-ef20ce8593ac-dns-svc\") pod \"dnsmasq-dns-666b6646f7-hdpgf\" (UID: \"78498723-5c73-4aa4-8480-ef20ce8593ac\") " pod="openstack/dnsmasq-dns-666b6646f7-hdpgf" Nov 25 10:50:13 crc kubenswrapper[4813]: I1125 10:50:13.048185 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78498723-5c73-4aa4-8480-ef20ce8593ac-config\") pod \"dnsmasq-dns-666b6646f7-hdpgf\" (UID: \"78498723-5c73-4aa4-8480-ef20ce8593ac\") " pod="openstack/dnsmasq-dns-666b6646f7-hdpgf" Nov 25 10:50:13 crc kubenswrapper[4813]: I1125 10:50:13.098587 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7467\" (UniqueName: 
\"kubernetes.io/projected/78498723-5c73-4aa4-8480-ef20ce8593ac-kube-api-access-q7467\") pod \"dnsmasq-dns-666b6646f7-hdpgf\" (UID: \"78498723-5c73-4aa4-8480-ef20ce8593ac\") " pod="openstack/dnsmasq-dns-666b6646f7-hdpgf" Nov 25 10:50:13 crc kubenswrapper[4813]: I1125 10:50:13.118091 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-s5ghf"] Nov 25 10:50:13 crc kubenswrapper[4813]: I1125 10:50:13.158412 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-6bkfh"] Nov 25 10:50:13 crc kubenswrapper[4813]: I1125 10:50:13.160033 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-6bkfh" Nov 25 10:50:13 crc kubenswrapper[4813]: I1125 10:50:13.165406 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-6bkfh"] Nov 25 10:50:13 crc kubenswrapper[4813]: I1125 10:50:13.168460 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-hdpgf" Nov 25 10:50:13 crc kubenswrapper[4813]: I1125 10:50:13.250298 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe34d8fb-5b40-4191-8015-acb5ed8ea562-config\") pod \"dnsmasq-dns-57d769cc4f-6bkfh\" (UID: \"fe34d8fb-5b40-4191-8015-acb5ed8ea562\") " pod="openstack/dnsmasq-dns-57d769cc4f-6bkfh" Nov 25 10:50:13 crc kubenswrapper[4813]: I1125 10:50:13.250386 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjvcc\" (UniqueName: \"kubernetes.io/projected/fe34d8fb-5b40-4191-8015-acb5ed8ea562-kube-api-access-tjvcc\") pod \"dnsmasq-dns-57d769cc4f-6bkfh\" (UID: \"fe34d8fb-5b40-4191-8015-acb5ed8ea562\") " pod="openstack/dnsmasq-dns-57d769cc4f-6bkfh" Nov 25 10:50:13 crc kubenswrapper[4813]: I1125 10:50:13.250469 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fe34d8fb-5b40-4191-8015-acb5ed8ea562-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-6bkfh\" (UID: \"fe34d8fb-5b40-4191-8015-acb5ed8ea562\") " pod="openstack/dnsmasq-dns-57d769cc4f-6bkfh" Nov 25 10:50:13 crc kubenswrapper[4813]: I1125 10:50:13.352360 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe34d8fb-5b40-4191-8015-acb5ed8ea562-config\") pod \"dnsmasq-dns-57d769cc4f-6bkfh\" (UID: \"fe34d8fb-5b40-4191-8015-acb5ed8ea562\") " pod="openstack/dnsmasq-dns-57d769cc4f-6bkfh" Nov 25 10:50:13 crc kubenswrapper[4813]: I1125 10:50:13.352644 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjvcc\" (UniqueName: \"kubernetes.io/projected/fe34d8fb-5b40-4191-8015-acb5ed8ea562-kube-api-access-tjvcc\") pod \"dnsmasq-dns-57d769cc4f-6bkfh\" (UID: \"fe34d8fb-5b40-4191-8015-acb5ed8ea562\") " pod="openstack/dnsmasq-dns-57d769cc4f-6bkfh" Nov 25 10:50:13 crc kubenswrapper[4813]: I1125 10:50:13.352819 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fe34d8fb-5b40-4191-8015-acb5ed8ea562-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-6bkfh\" (UID: \"fe34d8fb-5b40-4191-8015-acb5ed8ea562\") " pod="openstack/dnsmasq-dns-57d769cc4f-6bkfh" Nov 25 10:50:13 crc kubenswrapper[4813]: I1125 10:50:13.353531 4813 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fe34d8fb-5b40-4191-8015-acb5ed8ea562-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-6bkfh\" (UID: \"fe34d8fb-5b40-4191-8015-acb5ed8ea562\") " pod="openstack/dnsmasq-dns-57d769cc4f-6bkfh" Nov 25 10:50:13 crc kubenswrapper[4813]: I1125 10:50:13.354753 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe34d8fb-5b40-4191-8015-acb5ed8ea562-config\") pod \"dnsmasq-dns-57d769cc4f-6bkfh\" (UID: \"fe34d8fb-5b40-4191-8015-acb5ed8ea562\") " pod="openstack/dnsmasq-dns-57d769cc4f-6bkfh" Nov 25 10:50:13 crc kubenswrapper[4813]: I1125 10:50:13.379045 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjvcc\" (UniqueName: \"kubernetes.io/projected/fe34d8fb-5b40-4191-8015-acb5ed8ea562-kube-api-access-tjvcc\") pod \"dnsmasq-dns-57d769cc4f-6bkfh\" (UID: \"fe34d8fb-5b40-4191-8015-acb5ed8ea562\") " pod="openstack/dnsmasq-dns-57d769cc4f-6bkfh" Nov 25 10:50:13 crc kubenswrapper[4813]: I1125 10:50:13.487234 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-6bkfh" Nov 25 10:50:13 crc kubenswrapper[4813]: I1125 10:50:13.590015 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-hdpgf"] Nov 25 10:50:13 crc kubenswrapper[4813]: W1125 10:50:13.608048 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78498723_5c73_4aa4_8480_ef20ce8593ac.slice/crio-e22f617a5ceb793a81c6db1b441b011de6614d327888ab2d8c50d73caa8f76e7 WatchSource:0}: Error finding container e22f617a5ceb793a81c6db1b441b011de6614d327888ab2d8c50d73caa8f76e7: Status 404 returned error can't find the container with id e22f617a5ceb793a81c6db1b441b011de6614d327888ab2d8c50d73caa8f76e7 Nov 25 10:50:13 crc kubenswrapper[4813]: I1125 10:50:13.980818 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 25 10:50:13 crc kubenswrapper[4813]: I1125 10:50:13.983042 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 25 10:50:13 crc kubenswrapper[4813]: I1125 10:50:13.991533 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 25 10:50:13 crc kubenswrapper[4813]: I1125 10:50:13.991824 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 25 10:50:13 crc kubenswrapper[4813]: I1125 10:50:13.991944 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 25 10:50:13 crc kubenswrapper[4813]: I1125 10:50:13.992049 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 25 10:50:13 crc kubenswrapper[4813]: I1125 10:50:13.992179 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 25 10:50:13 crc kubenswrapper[4813]: I1125 10:50:13.992358 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 25 10:50:13 crc kubenswrapper[4813]: I1125 10:50:13.992535 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-nmx22" Nov 25 10:50:14 crc kubenswrapper[4813]: I1125 10:50:14.005767 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 25 10:50:14 crc kubenswrapper[4813]: I1125 10:50:14.030050 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-hdpgf" event={"ID":"78498723-5c73-4aa4-8480-ef20ce8593ac","Type":"ContainerStarted","Data":"e22f617a5ceb793a81c6db1b441b011de6614d327888ab2d8c50d73caa8f76e7"} Nov 25 10:50:14 crc kubenswrapper[4813]: I1125 10:50:14.044001 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-6bkfh"] Nov 25 10:50:14 crc kubenswrapper[4813]: W1125 10:50:14.044761 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe34d8fb_5b40_4191_8015_acb5ed8ea562.slice/crio-8efc4af9dd622d1354fc662e13203da3d0869a4c858a281e7d1e57f6f51500a6 WatchSource:0}: Error finding container 8efc4af9dd622d1354fc662e13203da3d0869a4c858a281e7d1e57f6f51500a6: Status 404 returned error can't find the container with id 8efc4af9dd622d1354fc662e13203da3d0869a4c858a281e7d1e57f6f51500a6 Nov 25 10:50:14 crc kubenswrapper[4813]: I1125 10:50:14.068963 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/aea2efa1-cb45-4657-8ea6-efd7799cb0a4-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"aea2efa1-cb45-4657-8ea6-efd7799cb0a4\") " pod="openstack/rabbitmq-server-0" Nov 25 10:50:14 crc kubenswrapper[4813]: I1125 10:50:14.069216 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/aea2efa1-cb45-4657-8ea6-efd7799cb0a4-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"aea2efa1-cb45-4657-8ea6-efd7799cb0a4\") " pod="openstack/rabbitmq-server-0" Nov 25 10:50:14 crc kubenswrapper[4813]: I1125 10:50:14.069360 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/aea2efa1-cb45-4657-8ea6-efd7799cb0a4-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"aea2efa1-cb45-4657-8ea6-efd7799cb0a4\") " 
pod="openstack/rabbitmq-server-0" Nov 25 10:50:14 crc kubenswrapper[4813]: I1125 10:50:14.069449 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/aea2efa1-cb45-4657-8ea6-efd7799cb0a4-config-data\") pod \"rabbitmq-server-0\" (UID: \"aea2efa1-cb45-4657-8ea6-efd7799cb0a4\") " pod="openstack/rabbitmq-server-0" Nov 25 10:50:14 crc kubenswrapper[4813]: I1125 10:50:14.069537 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/aea2efa1-cb45-4657-8ea6-efd7799cb0a4-pod-info\") pod \"rabbitmq-server-0\" (UID: \"aea2efa1-cb45-4657-8ea6-efd7799cb0a4\") " pod="openstack/rabbitmq-server-0" Nov 25 10:50:14 crc kubenswrapper[4813]: I1125 10:50:14.069646 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghnzx\" (UniqueName: \"kubernetes.io/projected/aea2efa1-cb45-4657-8ea6-efd7799cb0a4-kube-api-access-ghnzx\") pod \"rabbitmq-server-0\" (UID: \"aea2efa1-cb45-4657-8ea6-efd7799cb0a4\") " pod="openstack/rabbitmq-server-0" Nov 25 10:50:14 crc kubenswrapper[4813]: I1125 10:50:14.069824 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"aea2efa1-cb45-4657-8ea6-efd7799cb0a4\") " pod="openstack/rabbitmq-server-0" Nov 25 10:50:14 crc kubenswrapper[4813]: I1125 10:50:14.069956 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/aea2efa1-cb45-4657-8ea6-efd7799cb0a4-server-conf\") pod \"rabbitmq-server-0\" (UID: \"aea2efa1-cb45-4657-8ea6-efd7799cb0a4\") " pod="openstack/rabbitmq-server-0" Nov 25 10:50:14 crc kubenswrapper[4813]: I1125 10:50:14.070051 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/aea2efa1-cb45-4657-8ea6-efd7799cb0a4-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"aea2efa1-cb45-4657-8ea6-efd7799cb0a4\") " pod="openstack/rabbitmq-server-0" Nov 25 10:50:14 crc kubenswrapper[4813]: I1125 10:50:14.070148 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/aea2efa1-cb45-4657-8ea6-efd7799cb0a4-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"aea2efa1-cb45-4657-8ea6-efd7799cb0a4\") " pod="openstack/rabbitmq-server-0" Nov 25 10:50:14 crc kubenswrapper[4813]: I1125 10:50:14.070311 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/aea2efa1-cb45-4657-8ea6-efd7799cb0a4-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"aea2efa1-cb45-4657-8ea6-efd7799cb0a4\") " pod="openstack/rabbitmq-server-0" Nov 25 10:50:14 crc kubenswrapper[4813]: I1125 10:50:14.171324 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/aea2efa1-cb45-4657-8ea6-efd7799cb0a4-server-conf\") pod \"rabbitmq-server-0\" (UID: \"aea2efa1-cb45-4657-8ea6-efd7799cb0a4\") " pod="openstack/rabbitmq-server-0" Nov 25 10:50:14 crc kubenswrapper[4813]: I1125 10:50:14.171389 4813 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/aea2efa1-cb45-4657-8ea6-efd7799cb0a4-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"aea2efa1-cb45-4657-8ea6-efd7799cb0a4\") " pod="openstack/rabbitmq-server-0" Nov 25 10:50:14 crc kubenswrapper[4813]: I1125 10:50:14.171422 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/aea2efa1-cb45-4657-8ea6-efd7799cb0a4-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"aea2efa1-cb45-4657-8ea6-efd7799cb0a4\") " pod="openstack/rabbitmq-server-0" Nov 25 10:50:14 crc kubenswrapper[4813]: I1125 10:50:14.171441 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/aea2efa1-cb45-4657-8ea6-efd7799cb0a4-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"aea2efa1-cb45-4657-8ea6-efd7799cb0a4\") " pod="openstack/rabbitmq-server-0" Nov 25 10:50:14 crc kubenswrapper[4813]: I1125 10:50:14.171463 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/aea2efa1-cb45-4657-8ea6-efd7799cb0a4-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"aea2efa1-cb45-4657-8ea6-efd7799cb0a4\") " pod="openstack/rabbitmq-server-0" Nov 25 10:50:14 crc kubenswrapper[4813]: I1125 10:50:14.171483 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/aea2efa1-cb45-4657-8ea6-efd7799cb0a4-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"aea2efa1-cb45-4657-8ea6-efd7799cb0a4\") " pod="openstack/rabbitmq-server-0" Nov 25 10:50:14 crc kubenswrapper[4813]: I1125 10:50:14.171520 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/aea2efa1-cb45-4657-8ea6-efd7799cb0a4-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"aea2efa1-cb45-4657-8ea6-efd7799cb0a4\") " pod="openstack/rabbitmq-server-0" Nov 25 10:50:14 crc kubenswrapper[4813]: I1125 10:50:14.171540 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/aea2efa1-cb45-4657-8ea6-efd7799cb0a4-config-data\") pod \"rabbitmq-server-0\" (UID: \"aea2efa1-cb45-4657-8ea6-efd7799cb0a4\") " pod="openstack/rabbitmq-server-0" Nov 25 10:50:14 crc kubenswrapper[4813]: I1125 10:50:14.171561 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/aea2efa1-cb45-4657-8ea6-efd7799cb0a4-pod-info\") pod \"rabbitmq-server-0\" (UID: \"aea2efa1-cb45-4657-8ea6-efd7799cb0a4\") " pod="openstack/rabbitmq-server-0" Nov 25 10:50:14 crc kubenswrapper[4813]: I1125 10:50:14.171585 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghnzx\" (UniqueName: \"kubernetes.io/projected/aea2efa1-cb45-4657-8ea6-efd7799cb0a4-kube-api-access-ghnzx\") pod \"rabbitmq-server-0\" (UID: \"aea2efa1-cb45-4657-8ea6-efd7799cb0a4\") " pod="openstack/rabbitmq-server-0" Nov 25 10:50:14 crc kubenswrapper[4813]: I1125 10:50:14.171609 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"aea2efa1-cb45-4657-8ea6-efd7799cb0a4\") " 
pod="openstack/rabbitmq-server-0" Nov 25 10:50:14 crc kubenswrapper[4813]: I1125 10:50:14.172078 4813 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"aea2efa1-cb45-4657-8ea6-efd7799cb0a4\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-server-0" Nov 25 10:50:14 crc kubenswrapper[4813]: I1125 10:50:14.172718 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/aea2efa1-cb45-4657-8ea6-efd7799cb0a4-server-conf\") pod \"rabbitmq-server-0\" (UID: \"aea2efa1-cb45-4657-8ea6-efd7799cb0a4\") " pod="openstack/rabbitmq-server-0" Nov 25 10:50:14 crc kubenswrapper[4813]: I1125 10:50:14.173173 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/aea2efa1-cb45-4657-8ea6-efd7799cb0a4-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"aea2efa1-cb45-4657-8ea6-efd7799cb0a4\") " pod="openstack/rabbitmq-server-0" Nov 25 10:50:14 crc kubenswrapper[4813]: I1125 10:50:14.173226 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/aea2efa1-cb45-4657-8ea6-efd7799cb0a4-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"aea2efa1-cb45-4657-8ea6-efd7799cb0a4\") " pod="openstack/rabbitmq-server-0" Nov 25 10:50:14 crc kubenswrapper[4813]: I1125 10:50:14.174016 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/aea2efa1-cb45-4657-8ea6-efd7799cb0a4-config-data\") pod \"rabbitmq-server-0\" (UID: \"aea2efa1-cb45-4657-8ea6-efd7799cb0a4\") " pod="openstack/rabbitmq-server-0" Nov 25 10:50:14 crc kubenswrapper[4813]: I1125 10:50:14.174189 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/aea2efa1-cb45-4657-8ea6-efd7799cb0a4-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"aea2efa1-cb45-4657-8ea6-efd7799cb0a4\") " pod="openstack/rabbitmq-server-0" Nov 25 10:50:14 crc kubenswrapper[4813]: I1125 10:50:14.176976 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/aea2efa1-cb45-4657-8ea6-efd7799cb0a4-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"aea2efa1-cb45-4657-8ea6-efd7799cb0a4\") " pod="openstack/rabbitmq-server-0" Nov 25 10:50:14 crc kubenswrapper[4813]: I1125 10:50:14.177656 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/aea2efa1-cb45-4657-8ea6-efd7799cb0a4-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"aea2efa1-cb45-4657-8ea6-efd7799cb0a4\") " pod="openstack/rabbitmq-server-0" Nov 25 10:50:14 crc kubenswrapper[4813]: I1125 10:50:14.184099 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/aea2efa1-cb45-4657-8ea6-efd7799cb0a4-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"aea2efa1-cb45-4657-8ea6-efd7799cb0a4\") " pod="openstack/rabbitmq-server-0" Nov 25 10:50:14 crc kubenswrapper[4813]: I1125 10:50:14.184613 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/aea2efa1-cb45-4657-8ea6-efd7799cb0a4-pod-info\") pod 
\"rabbitmq-server-0\" (UID: \"aea2efa1-cb45-4657-8ea6-efd7799cb0a4\") " pod="openstack/rabbitmq-server-0" Nov 25 10:50:14 crc kubenswrapper[4813]: I1125 10:50:14.193208 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghnzx\" (UniqueName: \"kubernetes.io/projected/aea2efa1-cb45-4657-8ea6-efd7799cb0a4-kube-api-access-ghnzx\") pod \"rabbitmq-server-0\" (UID: \"aea2efa1-cb45-4657-8ea6-efd7799cb0a4\") " pod="openstack/rabbitmq-server-0" Nov 25 10:50:14 crc kubenswrapper[4813]: I1125 10:50:14.200142 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"aea2efa1-cb45-4657-8ea6-efd7799cb0a4\") " pod="openstack/rabbitmq-server-0" Nov 25 10:50:14 crc kubenswrapper[4813]: I1125 10:50:14.268106 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 25 10:50:14 crc kubenswrapper[4813]: I1125 10:50:14.270542 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 25 10:50:14 crc kubenswrapper[4813]: I1125 10:50:14.278868 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 25 10:50:14 crc kubenswrapper[4813]: I1125 10:50:14.278914 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 25 10:50:14 crc kubenswrapper[4813]: I1125 10:50:14.279085 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 25 10:50:14 crc kubenswrapper[4813]: I1125 10:50:14.279133 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 25 10:50:14 crc kubenswrapper[4813]: I1125 10:50:14.279320 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-z8f8b" Nov 25 10:50:14 crc kubenswrapper[4813]: I1125 10:50:14.279333 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 25 10:50:14 crc kubenswrapper[4813]: I1125 10:50:14.279658 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 25 10:50:14 crc kubenswrapper[4813]: I1125 10:50:14.280855 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 25 10:50:14 crc kubenswrapper[4813]: I1125 10:50:14.328321 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 25 10:50:14 crc kubenswrapper[4813]: I1125 10:50:14.375741 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bf91d2ed-6d43-49b1-8010-1f59f38aea76-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"bf91d2ed-6d43-49b1-8010-1f59f38aea76\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 10:50:14 crc kubenswrapper[4813]: I1125 10:50:14.375783 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bf91d2ed-6d43-49b1-8010-1f59f38aea76-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"bf91d2ed-6d43-49b1-8010-1f59f38aea76\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 10:50:14 crc kubenswrapper[4813]: I1125 10:50:14.375825 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bf91d2ed-6d43-49b1-8010-1f59f38aea76-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"bf91d2ed-6d43-49b1-8010-1f59f38aea76\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 10:50:14 crc kubenswrapper[4813]: I1125 10:50:14.376089 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bf91d2ed-6d43-49b1-8010-1f59f38aea76-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"bf91d2ed-6d43-49b1-8010-1f59f38aea76\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 10:50:14 crc kubenswrapper[4813]: I1125 10:50:14.376204 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bf91d2ed-6d43-49b1-8010-1f59f38aea76-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"bf91d2ed-6d43-49b1-8010-1f59f38aea76\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 10:50:14 crc kubenswrapper[4813]: I1125 10:50:14.376307 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bf91d2ed-6d43-49b1-8010-1f59f38aea76-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bf91d2ed-6d43-49b1-8010-1f59f38aea76\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 10:50:14 crc kubenswrapper[4813]: I1125 10:50:14.376365 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bf91d2ed-6d43-49b1-8010-1f59f38aea76-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"bf91d2ed-6d43-49b1-8010-1f59f38aea76\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 10:50:14 crc kubenswrapper[4813]: I1125 10:50:14.376477 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bf91d2ed-6d43-49b1-8010-1f59f38aea76-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"bf91d2ed-6d43-49b1-8010-1f59f38aea76\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 10:50:14 crc kubenswrapper[4813]: I1125 10:50:14.376505 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6k89\" (UniqueName: \"kubernetes.io/projected/bf91d2ed-6d43-49b1-8010-1f59f38aea76-kube-api-access-z6k89\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"bf91d2ed-6d43-49b1-8010-1f59f38aea76\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 10:50:14 crc kubenswrapper[4813]: I1125 10:50:14.376538 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"bf91d2ed-6d43-49b1-8010-1f59f38aea76\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 10:50:14 crc kubenswrapper[4813]: I1125 10:50:14.376608 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bf91d2ed-6d43-49b1-8010-1f59f38aea76-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bf91d2ed-6d43-49b1-8010-1f59f38aea76\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 10:50:14 crc kubenswrapper[4813]: I1125 10:50:14.478860 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bf91d2ed-6d43-49b1-8010-1f59f38aea76-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"bf91d2ed-6d43-49b1-8010-1f59f38aea76\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 10:50:14 crc kubenswrapper[4813]: I1125 10:50:14.479330 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6k89\" (UniqueName: \"kubernetes.io/projected/bf91d2ed-6d43-49b1-8010-1f59f38aea76-kube-api-access-z6k89\") pod \"rabbitmq-cell1-server-0\" (UID: \"bf91d2ed-6d43-49b1-8010-1f59f38aea76\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 10:50:14 crc kubenswrapper[4813]: I1125 10:50:14.479373 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"bf91d2ed-6d43-49b1-8010-1f59f38aea76\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 10:50:14 crc kubenswrapper[4813]: I1125 10:50:14.479690 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bf91d2ed-6d43-49b1-8010-1f59f38aea76-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bf91d2ed-6d43-49b1-8010-1f59f38aea76\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 10:50:14 crc kubenswrapper[4813]: I1125 10:50:14.479735 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bf91d2ed-6d43-49b1-8010-1f59f38aea76-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"bf91d2ed-6d43-49b1-8010-1f59f38aea76\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 10:50:14 crc kubenswrapper[4813]: I1125 10:50:14.479753 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bf91d2ed-6d43-49b1-8010-1f59f38aea76-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"bf91d2ed-6d43-49b1-8010-1f59f38aea76\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 10:50:14 crc kubenswrapper[4813]: I1125 10:50:14.479788 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bf91d2ed-6d43-49b1-8010-1f59f38aea76-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"bf91d2ed-6d43-49b1-8010-1f59f38aea76\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 10:50:14 crc 
kubenswrapper[4813]: I1125 10:50:14.480414 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bf91d2ed-6d43-49b1-8010-1f59f38aea76-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"bf91d2ed-6d43-49b1-8010-1f59f38aea76\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 10:50:14 crc kubenswrapper[4813]: I1125 10:50:14.480452 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bf91d2ed-6d43-49b1-8010-1f59f38aea76-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"bf91d2ed-6d43-49b1-8010-1f59f38aea76\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 10:50:14 crc kubenswrapper[4813]: I1125 10:50:14.480493 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bf91d2ed-6d43-49b1-8010-1f59f38aea76-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bf91d2ed-6d43-49b1-8010-1f59f38aea76\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 10:50:14 crc kubenswrapper[4813]: I1125 10:50:14.480514 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bf91d2ed-6d43-49b1-8010-1f59f38aea76-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"bf91d2ed-6d43-49b1-8010-1f59f38aea76\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 10:50:14 crc kubenswrapper[4813]: I1125 10:50:14.480940 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bf91d2ed-6d43-49b1-8010-1f59f38aea76-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bf91d2ed-6d43-49b1-8010-1f59f38aea76\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 10:50:14 crc kubenswrapper[4813]: I1125 10:50:14.481005 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bf91d2ed-6d43-49b1-8010-1f59f38aea76-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"bf91d2ed-6d43-49b1-8010-1f59f38aea76\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 10:50:14 crc kubenswrapper[4813]: I1125 10:50:14.481070 4813 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"bf91d2ed-6d43-49b1-8010-1f59f38aea76\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/rabbitmq-cell1-server-0" Nov 25 10:50:14 crc kubenswrapper[4813]: I1125 10:50:14.481653 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bf91d2ed-6d43-49b1-8010-1f59f38aea76-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"bf91d2ed-6d43-49b1-8010-1f59f38aea76\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 10:50:14 crc kubenswrapper[4813]: I1125 10:50:14.481747 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bf91d2ed-6d43-49b1-8010-1f59f38aea76-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"bf91d2ed-6d43-49b1-8010-1f59f38aea76\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 10:50:14 crc kubenswrapper[4813]: I1125 10:50:14.482581 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/bf91d2ed-6d43-49b1-8010-1f59f38aea76-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bf91d2ed-6d43-49b1-8010-1f59f38aea76\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 10:50:14 crc kubenswrapper[4813]: I1125 10:50:14.491318 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bf91d2ed-6d43-49b1-8010-1f59f38aea76-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"bf91d2ed-6d43-49b1-8010-1f59f38aea76\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 10:50:14 crc kubenswrapper[4813]: I1125 10:50:14.492343 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bf91d2ed-6d43-49b1-8010-1f59f38aea76-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"bf91d2ed-6d43-49b1-8010-1f59f38aea76\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 10:50:14 crc kubenswrapper[4813]: I1125 10:50:14.493361 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bf91d2ed-6d43-49b1-8010-1f59f38aea76-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"bf91d2ed-6d43-49b1-8010-1f59f38aea76\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 10:50:14 crc kubenswrapper[4813]: I1125 10:50:14.515154 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bf91d2ed-6d43-49b1-8010-1f59f38aea76-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"bf91d2ed-6d43-49b1-8010-1f59f38aea76\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 10:50:14 crc kubenswrapper[4813]: I1125 10:50:14.515470 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6k89\" (UniqueName: \"kubernetes.io/projected/bf91d2ed-6d43-49b1-8010-1f59f38aea76-kube-api-access-z6k89\") pod \"rabbitmq-cell1-server-0\" (UID: \"bf91d2ed-6d43-49b1-8010-1f59f38aea76\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 10:50:14 crc kubenswrapper[4813]: I1125 10:50:14.524398 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"bf91d2ed-6d43-49b1-8010-1f59f38aea76\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 10:50:14 crc kubenswrapper[4813]: I1125 10:50:14.601726 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 25 10:50:14 crc kubenswrapper[4813]: I1125 10:50:14.737615 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 25 10:50:15 crc kubenswrapper[4813]: I1125 10:50:15.044232 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"aea2efa1-cb45-4657-8ea6-efd7799cb0a4","Type":"ContainerStarted","Data":"c7ac6a6c9c3cab52f8e97e9bb06c04d268cdf7f7347bfc71e5119bce744bfb96"} Nov 25 10:50:15 crc kubenswrapper[4813]: I1125 10:50:15.047412 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-6bkfh" event={"ID":"fe34d8fb-5b40-4191-8015-acb5ed8ea562","Type":"ContainerStarted","Data":"8efc4af9dd622d1354fc662e13203da3d0869a4c858a281e7d1e57f6f51500a6"} Nov 25 10:50:15 crc kubenswrapper[4813]: I1125 10:50:15.183724 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 25 10:50:15 crc kubenswrapper[4813]: I1125 10:50:15.653574 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Nov 25 10:50:15 crc kubenswrapper[4813]: I1125 10:50:15.657448 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Nov 25 10:50:15 crc kubenswrapper[4813]: I1125 10:50:15.660448 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-fsmcq" Nov 25 10:50:15 crc kubenswrapper[4813]: I1125 10:50:15.660662 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Nov 25 10:50:15 crc kubenswrapper[4813]: I1125 10:50:15.661511 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Nov 25 10:50:15 crc kubenswrapper[4813]: I1125 10:50:15.661978 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Nov 25 10:50:15 crc kubenswrapper[4813]: I1125 10:50:15.666173 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 25 10:50:15 crc kubenswrapper[4813]: I1125 10:50:15.669609 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Nov 25 10:50:15 crc kubenswrapper[4813]: I1125 10:50:15.714518 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9005be17-9874-4f4f-bd91-39b3c74314ec-kolla-config\") pod \"openstack-galera-0\" (UID: \"9005be17-9874-4f4f-bd91-39b3c74314ec\") " pod="openstack/openstack-galera-0" Nov 25 10:50:15 crc kubenswrapper[4813]: I1125 10:50:15.714887 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"9005be17-9874-4f4f-bd91-39b3c74314ec\") " pod="openstack/openstack-galera-0" Nov 25 10:50:15 crc kubenswrapper[4813]: I1125 10:50:15.714948 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9005be17-9874-4f4f-bd91-39b3c74314ec-operator-scripts\") pod \"openstack-galera-0\" (UID: \"9005be17-9874-4f4f-bd91-39b3c74314ec\") " pod="openstack/openstack-galera-0" Nov 25 10:50:15 crc kubenswrapper[4813]: I1125 10:50:15.714971 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwccz\" (UniqueName: \"kubernetes.io/projected/9005be17-9874-4f4f-bd91-39b3c74314ec-kube-api-access-xwccz\") pod \"openstack-galera-0\" (UID: \"9005be17-9874-4f4f-bd91-39b3c74314ec\") " pod="openstack/openstack-galera-0" Nov 25 10:50:15 crc kubenswrapper[4813]: I1125 10:50:15.715058 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9005be17-9874-4f4f-bd91-39b3c74314ec-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"9005be17-9874-4f4f-bd91-39b3c74314ec\") " pod="openstack/openstack-galera-0" Nov 25 10:50:15 crc kubenswrapper[4813]: I1125 10:50:15.715099 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9005be17-9874-4f4f-bd91-39b3c74314ec-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"9005be17-9874-4f4f-bd91-39b3c74314ec\") " pod="openstack/openstack-galera-0" Nov 25 10:50:15 crc kubenswrapper[4813]: I1125 10:50:15.715126 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9005be17-9874-4f4f-bd91-39b3c74314ec-config-data-default\") pod \"openstack-galera-0\" (UID: \"9005be17-9874-4f4f-bd91-39b3c74314ec\") " pod="openstack/openstack-galera-0" Nov 25 10:50:15 crc kubenswrapper[4813]: I1125 10:50:15.715156 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9005be17-9874-4f4f-bd91-39b3c74314ec-config-data-generated\") pod \"openstack-galera-0\" (UID: \"9005be17-9874-4f4f-bd91-39b3c74314ec\") " pod="openstack/openstack-galera-0" Nov 25 10:50:15 crc kubenswrapper[4813]: I1125 10:50:15.816742 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9005be17-9874-4f4f-bd91-39b3c74314ec-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"9005be17-9874-4f4f-bd91-39b3c74314ec\") " pod="openstack/openstack-galera-0" Nov 25 10:50:15 crc kubenswrapper[4813]: I1125 10:50:15.817038 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9005be17-9874-4f4f-bd91-39b3c74314ec-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"9005be17-9874-4f4f-bd91-39b3c74314ec\") " pod="openstack/openstack-galera-0" Nov 25 10:50:15 crc kubenswrapper[4813]: I1125 10:50:15.817091 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9005be17-9874-4f4f-bd91-39b3c74314ec-config-data-default\") pod \"openstack-galera-0\" (UID: \"9005be17-9874-4f4f-bd91-39b3c74314ec\") " pod="openstack/openstack-galera-0" Nov 25 10:50:15 crc kubenswrapper[4813]: I1125 10:50:15.817157 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9005be17-9874-4f4f-bd91-39b3c74314ec-config-data-generated\") pod \"openstack-galera-0\" (UID: \"9005be17-9874-4f4f-bd91-39b3c74314ec\") " pod="openstack/openstack-galera-0" Nov 25 10:50:15 crc kubenswrapper[4813]: I1125 10:50:15.817305 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9005be17-9874-4f4f-bd91-39b3c74314ec-kolla-config\") pod \"openstack-galera-0\" (UID: \"9005be17-9874-4f4f-bd91-39b3c74314ec\") " pod="openstack/openstack-galera-0" Nov 25 10:50:15 crc kubenswrapper[4813]: I1125 10:50:15.817367 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"9005be17-9874-4f4f-bd91-39b3c74314ec\") " pod="openstack/openstack-galera-0" Nov 25 10:50:15 crc kubenswrapper[4813]: I1125 10:50:15.817406 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9005be17-9874-4f4f-bd91-39b3c74314ec-operator-scripts\") pod \"openstack-galera-0\" (UID: \"9005be17-9874-4f4f-bd91-39b3c74314ec\") " pod="openstack/openstack-galera-0" Nov 25 10:50:15 crc kubenswrapper[4813]: I1125 10:50:15.817430 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwccz\" (UniqueName: \"kubernetes.io/projected/9005be17-9874-4f4f-bd91-39b3c74314ec-kube-api-access-xwccz\") pod \"openstack-galera-0\" (UID: \"9005be17-9874-4f4f-bd91-39b3c74314ec\") " pod="openstack/openstack-galera-0" Nov 25 10:50:15 crc kubenswrapper[4813]: I1125 10:50:15.818034 4813 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"9005be17-9874-4f4f-bd91-39b3c74314ec\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/openstack-galera-0" Nov 25 10:50:15 crc kubenswrapper[4813]: I1125 10:50:15.820367 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9005be17-9874-4f4f-bd91-39b3c74314ec-kolla-config\") pod \"openstack-galera-0\" (UID: \"9005be17-9874-4f4f-bd91-39b3c74314ec\") " pod="openstack/openstack-galera-0" Nov 25 10:50:15 crc kubenswrapper[4813]: I1125 10:50:15.820535 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9005be17-9874-4f4f-bd91-39b3c74314ec-config-data-default\") pod \"openstack-galera-0\" (UID: \"9005be17-9874-4f4f-bd91-39b3c74314ec\") " pod="openstack/openstack-galera-0" Nov 25 10:50:15 crc kubenswrapper[4813]: I1125 10:50:15.821036 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9005be17-9874-4f4f-bd91-39b3c74314ec-config-data-generated\") pod \"openstack-galera-0\" (UID: \"9005be17-9874-4f4f-bd91-39b3c74314ec\") " pod="openstack/openstack-galera-0" Nov 25 10:50:15 crc kubenswrapper[4813]: I1125 10:50:15.821348 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9005be17-9874-4f4f-bd91-39b3c74314ec-operator-scripts\") pod \"openstack-galera-0\" (UID: \"9005be17-9874-4f4f-bd91-39b3c74314ec\") " pod="openstack/openstack-galera-0" Nov 25 10:50:15 crc kubenswrapper[4813]: I1125 10:50:15.837668 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwccz\" (UniqueName: \"kubernetes.io/projected/9005be17-9874-4f4f-bd91-39b3c74314ec-kube-api-access-xwccz\") pod \"openstack-galera-0\" (UID: \"9005be17-9874-4f4f-bd91-39b3c74314ec\") " pod="openstack/openstack-galera-0" Nov 
25 10:50:15 crc kubenswrapper[4813]: I1125 10:50:15.842464 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9005be17-9874-4f4f-bd91-39b3c74314ec-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"9005be17-9874-4f4f-bd91-39b3c74314ec\") " pod="openstack/openstack-galera-0" Nov 25 10:50:15 crc kubenswrapper[4813]: I1125 10:50:15.850513 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9005be17-9874-4f4f-bd91-39b3c74314ec-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"9005be17-9874-4f4f-bd91-39b3c74314ec\") " pod="openstack/openstack-galera-0" Nov 25 10:50:15 crc kubenswrapper[4813]: I1125 10:50:15.852513 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"9005be17-9874-4f4f-bd91-39b3c74314ec\") " pod="openstack/openstack-galera-0" Nov 25 10:50:16 crc kubenswrapper[4813]: I1125 10:50:16.016744 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Nov 25 10:50:17 crc kubenswrapper[4813]: I1125 10:50:17.119255 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 25 10:50:17 crc kubenswrapper[4813]: I1125 10:50:17.121284 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 25 10:50:17 crc kubenswrapper[4813]: I1125 10:50:17.126426 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Nov 25 10:50:17 crc kubenswrapper[4813]: I1125 10:50:17.126668 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-tx99p" Nov 25 10:50:17 crc kubenswrapper[4813]: I1125 10:50:17.126927 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Nov 25 10:50:17 crc kubenswrapper[4813]: I1125 10:50:17.127093 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Nov 25 10:50:17 crc kubenswrapper[4813]: I1125 10:50:17.135443 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 25 10:50:17 crc kubenswrapper[4813]: I1125 10:50:17.153024 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0444b7b3-af36-4fca-80c6-8348adc42a58-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"0444b7b3-af36-4fca-80c6-8348adc42a58\") " pod="openstack/openstack-cell1-galera-0" Nov 25 10:50:17 crc kubenswrapper[4813]: I1125 10:50:17.153230 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cd65\" (UniqueName: \"kubernetes.io/projected/0444b7b3-af36-4fca-80c6-8348adc42a58-kube-api-access-9cd65\") pod \"openstack-cell1-galera-0\" (UID: \"0444b7b3-af36-4fca-80c6-8348adc42a58\") " pod="openstack/openstack-cell1-galera-0" Nov 25 10:50:17 crc kubenswrapper[4813]: I1125 10:50:17.153458 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0444b7b3-af36-4fca-80c6-8348adc42a58-combined-ca-bundle\") pod 
\"openstack-cell1-galera-0\" (UID: \"0444b7b3-af36-4fca-80c6-8348adc42a58\") " pod="openstack/openstack-cell1-galera-0" Nov 25 10:50:17 crc kubenswrapper[4813]: I1125 10:50:17.153517 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"0444b7b3-af36-4fca-80c6-8348adc42a58\") " pod="openstack/openstack-cell1-galera-0" Nov 25 10:50:17 crc kubenswrapper[4813]: I1125 10:50:17.153785 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0444b7b3-af36-4fca-80c6-8348adc42a58-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"0444b7b3-af36-4fca-80c6-8348adc42a58\") " pod="openstack/openstack-cell1-galera-0" Nov 25 10:50:17 crc kubenswrapper[4813]: I1125 10:50:17.153819 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0444b7b3-af36-4fca-80c6-8348adc42a58-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"0444b7b3-af36-4fca-80c6-8348adc42a58\") " pod="openstack/openstack-cell1-galera-0" Nov 25 10:50:17 crc kubenswrapper[4813]: I1125 10:50:17.153867 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0444b7b3-af36-4fca-80c6-8348adc42a58-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"0444b7b3-af36-4fca-80c6-8348adc42a58\") " pod="openstack/openstack-cell1-galera-0" Nov 25 10:50:17 crc kubenswrapper[4813]: I1125 10:50:17.153917 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/0444b7b3-af36-4fca-80c6-8348adc42a58-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"0444b7b3-af36-4fca-80c6-8348adc42a58\") " pod="openstack/openstack-cell1-galera-0" Nov 25 10:50:17 crc kubenswrapper[4813]: I1125 10:50:17.256577 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/0444b7b3-af36-4fca-80c6-8348adc42a58-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"0444b7b3-af36-4fca-80c6-8348adc42a58\") " pod="openstack/openstack-cell1-galera-0" Nov 25 10:50:17 crc kubenswrapper[4813]: I1125 10:50:17.257060 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0444b7b3-af36-4fca-80c6-8348adc42a58-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"0444b7b3-af36-4fca-80c6-8348adc42a58\") " pod="openstack/openstack-cell1-galera-0" Nov 25 10:50:17 crc kubenswrapper[4813]: I1125 10:50:17.257111 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cd65\" (UniqueName: \"kubernetes.io/projected/0444b7b3-af36-4fca-80c6-8348adc42a58-kube-api-access-9cd65\") pod \"openstack-cell1-galera-0\" (UID: \"0444b7b3-af36-4fca-80c6-8348adc42a58\") " pod="openstack/openstack-cell1-galera-0" Nov 25 10:50:17 crc kubenswrapper[4813]: I1125 10:50:17.257171 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0444b7b3-af36-4fca-80c6-8348adc42a58-combined-ca-bundle\") pod 
\"openstack-cell1-galera-0\" (UID: \"0444b7b3-af36-4fca-80c6-8348adc42a58\") " pod="openstack/openstack-cell1-galera-0" Nov 25 10:50:17 crc kubenswrapper[4813]: I1125 10:50:17.257216 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"0444b7b3-af36-4fca-80c6-8348adc42a58\") " pod="openstack/openstack-cell1-galera-0" Nov 25 10:50:17 crc kubenswrapper[4813]: I1125 10:50:17.257262 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0444b7b3-af36-4fca-80c6-8348adc42a58-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"0444b7b3-af36-4fca-80c6-8348adc42a58\") " pod="openstack/openstack-cell1-galera-0" Nov 25 10:50:17 crc kubenswrapper[4813]: I1125 10:50:17.257288 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0444b7b3-af36-4fca-80c6-8348adc42a58-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"0444b7b3-af36-4fca-80c6-8348adc42a58\") " pod="openstack/openstack-cell1-galera-0" Nov 25 10:50:17 crc kubenswrapper[4813]: I1125 10:50:17.257329 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0444b7b3-af36-4fca-80c6-8348adc42a58-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"0444b7b3-af36-4fca-80c6-8348adc42a58\") " pod="openstack/openstack-cell1-galera-0" Nov 25 10:50:17 crc kubenswrapper[4813]: I1125 10:50:17.257803 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0444b7b3-af36-4fca-80c6-8348adc42a58-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"0444b7b3-af36-4fca-80c6-8348adc42a58\") " pod="openstack/openstack-cell1-galera-0" Nov 25 10:50:17 crc kubenswrapper[4813]: I1125 10:50:17.258368 4813 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"0444b7b3-af36-4fca-80c6-8348adc42a58\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/openstack-cell1-galera-0" Nov 25 10:50:17 crc kubenswrapper[4813]: I1125 10:50:17.259455 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0444b7b3-af36-4fca-80c6-8348adc42a58-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"0444b7b3-af36-4fca-80c6-8348adc42a58\") " pod="openstack/openstack-cell1-galera-0" Nov 25 10:50:17 crc kubenswrapper[4813]: I1125 10:50:17.260074 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0444b7b3-af36-4fca-80c6-8348adc42a58-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"0444b7b3-af36-4fca-80c6-8348adc42a58\") " pod="openstack/openstack-cell1-galera-0" Nov 25 10:50:17 crc kubenswrapper[4813]: I1125 10:50:17.260902 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0444b7b3-af36-4fca-80c6-8348adc42a58-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"0444b7b3-af36-4fca-80c6-8348adc42a58\") " pod="openstack/openstack-cell1-galera-0" 
Nov 25 10:50:17 crc kubenswrapper[4813]: I1125 10:50:17.271184 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Nov 25 10:50:17 crc kubenswrapper[4813]: I1125 10:50:17.273922 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 25 10:50:17 crc kubenswrapper[4813]: I1125 10:50:17.278317 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Nov 25 10:50:17 crc kubenswrapper[4813]: I1125 10:50:17.278915 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0444b7b3-af36-4fca-80c6-8348adc42a58-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"0444b7b3-af36-4fca-80c6-8348adc42a58\") " pod="openstack/openstack-cell1-galera-0" Nov 25 10:50:17 crc kubenswrapper[4813]: I1125 10:50:17.280151 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/0444b7b3-af36-4fca-80c6-8348adc42a58-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"0444b7b3-af36-4fca-80c6-8348adc42a58\") " pod="openstack/openstack-cell1-galera-0" Nov 25 10:50:17 crc kubenswrapper[4813]: I1125 10:50:17.284058 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-p9x2p" Nov 25 10:50:17 crc kubenswrapper[4813]: I1125 10:50:17.284271 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Nov 25 10:50:17 crc kubenswrapper[4813]: I1125 10:50:17.285962 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cd65\" (UniqueName: \"kubernetes.io/projected/0444b7b3-af36-4fca-80c6-8348adc42a58-kube-api-access-9cd65\") pod \"openstack-cell1-galera-0\" (UID: \"0444b7b3-af36-4fca-80c6-8348adc42a58\") " pod="openstack/openstack-cell1-galera-0" Nov 25 10:50:17 crc kubenswrapper[4813]: I1125 10:50:17.294949 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 25 10:50:17 crc kubenswrapper[4813]: I1125 10:50:17.326745 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"0444b7b3-af36-4fca-80c6-8348adc42a58\") " pod="openstack/openstack-cell1-galera-0" Nov 25 10:50:17 crc kubenswrapper[4813]: I1125 10:50:17.358458 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/11b88009-8577-4264-afbf-8aee9bfc90f8-memcached-tls-certs\") pod \"memcached-0\" (UID: \"11b88009-8577-4264-afbf-8aee9bfc90f8\") " pod="openstack/memcached-0" Nov 25 10:50:17 crc kubenswrapper[4813]: I1125 10:50:17.358546 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11b88009-8577-4264-afbf-8aee9bfc90f8-combined-ca-bundle\") pod \"memcached-0\" (UID: \"11b88009-8577-4264-afbf-8aee9bfc90f8\") " pod="openstack/memcached-0" Nov 25 10:50:17 crc kubenswrapper[4813]: I1125 10:50:17.358641 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/11b88009-8577-4264-afbf-8aee9bfc90f8-kolla-config\") pod \"memcached-0\" (UID: 
\"11b88009-8577-4264-afbf-8aee9bfc90f8\") " pod="openstack/memcached-0" Nov 25 10:50:17 crc kubenswrapper[4813]: I1125 10:50:17.358692 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vz7l\" (UniqueName: \"kubernetes.io/projected/11b88009-8577-4264-afbf-8aee9bfc90f8-kube-api-access-8vz7l\") pod \"memcached-0\" (UID: \"11b88009-8577-4264-afbf-8aee9bfc90f8\") " pod="openstack/memcached-0" Nov 25 10:50:17 crc kubenswrapper[4813]: I1125 10:50:17.358727 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/11b88009-8577-4264-afbf-8aee9bfc90f8-config-data\") pod \"memcached-0\" (UID: \"11b88009-8577-4264-afbf-8aee9bfc90f8\") " pod="openstack/memcached-0" Nov 25 10:50:17 crc kubenswrapper[4813]: I1125 10:50:17.459859 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/11b88009-8577-4264-afbf-8aee9bfc90f8-memcached-tls-certs\") pod \"memcached-0\" (UID: \"11b88009-8577-4264-afbf-8aee9bfc90f8\") " pod="openstack/memcached-0" Nov 25 10:50:17 crc kubenswrapper[4813]: I1125 10:50:17.459927 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11b88009-8577-4264-afbf-8aee9bfc90f8-combined-ca-bundle\") pod \"memcached-0\" (UID: \"11b88009-8577-4264-afbf-8aee9bfc90f8\") " pod="openstack/memcached-0" Nov 25 10:50:17 crc kubenswrapper[4813]: I1125 10:50:17.459986 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/11b88009-8577-4264-afbf-8aee9bfc90f8-kolla-config\") pod \"memcached-0\" (UID: \"11b88009-8577-4264-afbf-8aee9bfc90f8\") " pod="openstack/memcached-0" Nov 25 10:50:17 crc kubenswrapper[4813]: I1125 10:50:17.460026 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vz7l\" (UniqueName: \"kubernetes.io/projected/11b88009-8577-4264-afbf-8aee9bfc90f8-kube-api-access-8vz7l\") pod \"memcached-0\" (UID: \"11b88009-8577-4264-afbf-8aee9bfc90f8\") " pod="openstack/memcached-0" Nov 25 10:50:17 crc kubenswrapper[4813]: I1125 10:50:17.460054 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/11b88009-8577-4264-afbf-8aee9bfc90f8-config-data\") pod \"memcached-0\" (UID: \"11b88009-8577-4264-afbf-8aee9bfc90f8\") " pod="openstack/memcached-0" Nov 25 10:50:17 crc kubenswrapper[4813]: I1125 10:50:17.460967 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/11b88009-8577-4264-afbf-8aee9bfc90f8-config-data\") pod \"memcached-0\" (UID: \"11b88009-8577-4264-afbf-8aee9bfc90f8\") " pod="openstack/memcached-0" Nov 25 10:50:17 crc kubenswrapper[4813]: I1125 10:50:17.461886 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/11b88009-8577-4264-afbf-8aee9bfc90f8-kolla-config\") pod \"memcached-0\" (UID: \"11b88009-8577-4264-afbf-8aee9bfc90f8\") " pod="openstack/memcached-0" Nov 25 10:50:17 crc kubenswrapper[4813]: I1125 10:50:17.465127 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 25 10:50:17 crc kubenswrapper[4813]: I1125 10:50:17.467340 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/11b88009-8577-4264-afbf-8aee9bfc90f8-memcached-tls-certs\") pod \"memcached-0\" (UID: \"11b88009-8577-4264-afbf-8aee9bfc90f8\") " pod="openstack/memcached-0" Nov 25 10:50:17 crc kubenswrapper[4813]: I1125 10:50:17.486411 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11b88009-8577-4264-afbf-8aee9bfc90f8-combined-ca-bundle\") pod \"memcached-0\" (UID: \"11b88009-8577-4264-afbf-8aee9bfc90f8\") " pod="openstack/memcached-0" Nov 25 10:50:17 crc kubenswrapper[4813]: I1125 10:50:17.509879 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vz7l\" (UniqueName: \"kubernetes.io/projected/11b88009-8577-4264-afbf-8aee9bfc90f8-kube-api-access-8vz7l\") pod \"memcached-0\" (UID: \"11b88009-8577-4264-afbf-8aee9bfc90f8\") " pod="openstack/memcached-0" Nov 25 10:50:17 crc kubenswrapper[4813]: I1125 10:50:17.721795 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 25 10:50:18 crc kubenswrapper[4813]: I1125 10:50:18.903173 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 25 10:50:18 crc kubenswrapper[4813]: I1125 10:50:18.904506 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 25 10:50:18 crc kubenswrapper[4813]: I1125 10:50:18.908643 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-jqtfl" Nov 25 10:50:18 crc kubenswrapper[4813]: I1125 10:50:18.913916 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 25 10:50:18 crc kubenswrapper[4813]: I1125 10:50:18.983850 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgvjg\" (UniqueName: \"kubernetes.io/projected/e9030c35-b810-4f59-b1e6-5daec39fcc6d-kube-api-access-vgvjg\") pod \"kube-state-metrics-0\" (UID: \"e9030c35-b810-4f59-b1e6-5daec39fcc6d\") " pod="openstack/kube-state-metrics-0" Nov 25 10:50:19 crc kubenswrapper[4813]: I1125 10:50:19.086128 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgvjg\" (UniqueName: \"kubernetes.io/projected/e9030c35-b810-4f59-b1e6-5daec39fcc6d-kube-api-access-vgvjg\") pod \"kube-state-metrics-0\" (UID: \"e9030c35-b810-4f59-b1e6-5daec39fcc6d\") " pod="openstack/kube-state-metrics-0" Nov 25 10:50:19 crc kubenswrapper[4813]: I1125 10:50:19.110505 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgvjg\" (UniqueName: \"kubernetes.io/projected/e9030c35-b810-4f59-b1e6-5daec39fcc6d-kube-api-access-vgvjg\") pod \"kube-state-metrics-0\" (UID: \"e9030c35-b810-4f59-b1e6-5daec39fcc6d\") " pod="openstack/kube-state-metrics-0" Nov 25 10:50:19 crc kubenswrapper[4813]: I1125 10:50:19.264356 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 25 10:50:20 crc kubenswrapper[4813]: W1125 10:50:20.020876 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf91d2ed_6d43_49b1_8010_1f59f38aea76.slice/crio-ae9c8e49f8a052724361ff9c217fcf1a23290285d701be18b6d0e4c58b7aecc9 WatchSource:0}: Error finding container ae9c8e49f8a052724361ff9c217fcf1a23290285d701be18b6d0e4c58b7aecc9: Status 404 returned error can't find the container with id ae9c8e49f8a052724361ff9c217fcf1a23290285d701be18b6d0e4c58b7aecc9 Nov 25 10:50:20 crc kubenswrapper[4813]: I1125 10:50:20.112945 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"bf91d2ed-6d43-49b1-8010-1f59f38aea76","Type":"ContainerStarted","Data":"ae9c8e49f8a052724361ff9c217fcf1a23290285d701be18b6d0e4c58b7aecc9"} Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.227126 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-qjpvf"] Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.229079 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-qjpvf" Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.234140 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.234390 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.234544 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-tnzzc" Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.235141 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-qjpvf"] Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.240966 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-kzv7f"] Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.254587 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-kzv7f" Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.254421 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-kzv7f"] Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.311130 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9bf77eb8-82fb-4ad7-9cf8-57d017a0ce0d-var-run\") pod \"ovn-controller-ovs-kzv7f\" (UID: \"9bf77eb8-82fb-4ad7-9cf8-57d017a0ce0d\") " pod="openstack/ovn-controller-ovs-kzv7f" Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.311201 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/da545e4e-8f60-4fb5-93e8-d9e9014c3c74-scripts\") pod \"ovn-controller-qjpvf\" (UID: \"da545e4e-8f60-4fb5-93e8-d9e9014c3c74\") " pod="openstack/ovn-controller-qjpvf" Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.311233 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-md975\" (UniqueName: \"kubernetes.io/projected/da545e4e-8f60-4fb5-93e8-d9e9014c3c74-kube-api-access-md975\") pod \"ovn-controller-qjpvf\" (UID: \"da545e4e-8f60-4fb5-93e8-d9e9014c3c74\") " pod="openstack/ovn-controller-qjpvf" Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.311272 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/9bf77eb8-82fb-4ad7-9cf8-57d017a0ce0d-etc-ovs\") pod \"ovn-controller-ovs-kzv7f\" (UID: \"9bf77eb8-82fb-4ad7-9cf8-57d017a0ce0d\") " pod="openstack/ovn-controller-ovs-kzv7f" Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.311306 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/da545e4e-8f60-4fb5-93e8-d9e9014c3c74-var-run-ovn\") pod \"ovn-controller-qjpvf\" (UID: \"da545e4e-8f60-4fb5-93e8-d9e9014c3c74\") " pod="openstack/ovn-controller-qjpvf" Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.311331 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/9bf77eb8-82fb-4ad7-9cf8-57d017a0ce0d-var-lib\") pod \"ovn-controller-ovs-kzv7f\" (UID: \"9bf77eb8-82fb-4ad7-9cf8-57d017a0ce0d\") " pod="openstack/ovn-controller-ovs-kzv7f" Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.311351 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da545e4e-8f60-4fb5-93e8-d9e9014c3c74-combined-ca-bundle\") pod \"ovn-controller-qjpvf\" (UID: \"da545e4e-8f60-4fb5-93e8-d9e9014c3c74\") " pod="openstack/ovn-controller-qjpvf" Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.311383 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/da545e4e-8f60-4fb5-93e8-d9e9014c3c74-var-log-ovn\") pod \"ovn-controller-qjpvf\" (UID: \"da545e4e-8f60-4fb5-93e8-d9e9014c3c74\") " pod="openstack/ovn-controller-qjpvf" Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.311404 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: 
\"kubernetes.io/host-path/9bf77eb8-82fb-4ad7-9cf8-57d017a0ce0d-var-log\") pod \"ovn-controller-ovs-kzv7f\" (UID: \"9bf77eb8-82fb-4ad7-9cf8-57d017a0ce0d\") " pod="openstack/ovn-controller-ovs-kzv7f" Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.311422 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/da545e4e-8f60-4fb5-93e8-d9e9014c3c74-var-run\") pod \"ovn-controller-qjpvf\" (UID: \"da545e4e-8f60-4fb5-93e8-d9e9014c3c74\") " pod="openstack/ovn-controller-qjpvf" Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.311455 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/da545e4e-8f60-4fb5-93e8-d9e9014c3c74-ovn-controller-tls-certs\") pod \"ovn-controller-qjpvf\" (UID: \"da545e4e-8f60-4fb5-93e8-d9e9014c3c74\") " pod="openstack/ovn-controller-qjpvf" Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.311472 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9bf77eb8-82fb-4ad7-9cf8-57d017a0ce0d-scripts\") pod \"ovn-controller-ovs-kzv7f\" (UID: \"9bf77eb8-82fb-4ad7-9cf8-57d017a0ce0d\") " pod="openstack/ovn-controller-ovs-kzv7f" Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.311503 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdt8d\" (UniqueName: \"kubernetes.io/projected/9bf77eb8-82fb-4ad7-9cf8-57d017a0ce0d-kube-api-access-kdt8d\") pod \"ovn-controller-ovs-kzv7f\" (UID: \"9bf77eb8-82fb-4ad7-9cf8-57d017a0ce0d\") " pod="openstack/ovn-controller-ovs-kzv7f" Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.401484 4813 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.403032 4813 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.403077 4813 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.403196 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.403514 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://cf4d6feac8fd516ce2d5e2ec13519c2bbd2d152cffe7c434fe2c4b478e8c9a7e" gracePeriod=15 Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.403566 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://46e1b456988c700012c86fac792b65d2e7c9a049057d5a17efbf600418191910" gracePeriod=15 Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.403604 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://f80f2017cddd8c12997b1818074df5aa37a902dca43c4b60dda58080e1887f8c" gracePeriod=15 Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.403753 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://f225dc69c294a0063eda858d71902e848fb59d4595c25bfeecdf8dfb60fdcd6f" gracePeriod=15 Nov 25 10:50:24 crc kubenswrapper[4813]: E1125 10:50:24.404219 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.404248 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Nov 25 10:50:24 crc kubenswrapper[4813]: E1125 10:50:24.404269 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.404276 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Nov 25 10:50:24 crc kubenswrapper[4813]: E1125 10:50:24.404295 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.404302 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Nov 25 10:50:24 crc kubenswrapper[4813]: E1125 10:50:24.404318 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.404323 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Nov 25 10:50:24 crc kubenswrapper[4813]: E1125 10:50:24.404343 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.404349 4813 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 25 10:50:24 crc kubenswrapper[4813]: E1125 10:50:24.404365 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.404372 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 25 10:50:24 crc kubenswrapper[4813]: E1125 10:50:24.404392 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.404398 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.404542 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.404556 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.404566 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.404575 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.404582 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.404589 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.406411 4813 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.403595 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://7cbb3888ff07d07784e188a0b7b49e0f5b421cfaeb61924a0a46094fb3795b32" gracePeriod=15 Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.416575 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/da545e4e-8f60-4fb5-93e8-d9e9014c3c74-ovn-controller-tls-certs\") pod \"ovn-controller-qjpvf\" (UID: \"da545e4e-8f60-4fb5-93e8-d9e9014c3c74\") " pod="openstack/ovn-controller-qjpvf" Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.416642 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9bf77eb8-82fb-4ad7-9cf8-57d017a0ce0d-scripts\") pod \"ovn-controller-ovs-kzv7f\" (UID: 
\"9bf77eb8-82fb-4ad7-9cf8-57d017a0ce0d\") " pod="openstack/ovn-controller-ovs-kzv7f" Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.416706 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdt8d\" (UniqueName: \"kubernetes.io/projected/9bf77eb8-82fb-4ad7-9cf8-57d017a0ce0d-kube-api-access-kdt8d\") pod \"ovn-controller-ovs-kzv7f\" (UID: \"9bf77eb8-82fb-4ad7-9cf8-57d017a0ce0d\") " pod="openstack/ovn-controller-ovs-kzv7f" Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.416751 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9bf77eb8-82fb-4ad7-9cf8-57d017a0ce0d-var-run\") pod \"ovn-controller-ovs-kzv7f\" (UID: \"9bf77eb8-82fb-4ad7-9cf8-57d017a0ce0d\") " pod="openstack/ovn-controller-ovs-kzv7f" Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.416799 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/da545e4e-8f60-4fb5-93e8-d9e9014c3c74-scripts\") pod \"ovn-controller-qjpvf\" (UID: \"da545e4e-8f60-4fb5-93e8-d9e9014c3c74\") " pod="openstack/ovn-controller-qjpvf" Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.416830 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-md975\" (UniqueName: \"kubernetes.io/projected/da545e4e-8f60-4fb5-93e8-d9e9014c3c74-kube-api-access-md975\") pod \"ovn-controller-qjpvf\" (UID: \"da545e4e-8f60-4fb5-93e8-d9e9014c3c74\") " pod="openstack/ovn-controller-qjpvf" Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.416853 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/9bf77eb8-82fb-4ad7-9cf8-57d017a0ce0d-etc-ovs\") pod \"ovn-controller-ovs-kzv7f\" (UID: \"9bf77eb8-82fb-4ad7-9cf8-57d017a0ce0d\") " pod="openstack/ovn-controller-ovs-kzv7f" Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.416884 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/da545e4e-8f60-4fb5-93e8-d9e9014c3c74-var-run-ovn\") pod \"ovn-controller-qjpvf\" (UID: \"da545e4e-8f60-4fb5-93e8-d9e9014c3c74\") " pod="openstack/ovn-controller-qjpvf" Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.416912 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/9bf77eb8-82fb-4ad7-9cf8-57d017a0ce0d-var-lib\") pod \"ovn-controller-ovs-kzv7f\" (UID: \"9bf77eb8-82fb-4ad7-9cf8-57d017a0ce0d\") " pod="openstack/ovn-controller-ovs-kzv7f" Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.416933 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da545e4e-8f60-4fb5-93e8-d9e9014c3c74-combined-ca-bundle\") pod \"ovn-controller-qjpvf\" (UID: \"da545e4e-8f60-4fb5-93e8-d9e9014c3c74\") " pod="openstack/ovn-controller-qjpvf" Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.416969 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/da545e4e-8f60-4fb5-93e8-d9e9014c3c74-var-log-ovn\") pod \"ovn-controller-qjpvf\" (UID: \"da545e4e-8f60-4fb5-93e8-d9e9014c3c74\") " pod="openstack/ovn-controller-qjpvf" Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.416996 4813 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/9bf77eb8-82fb-4ad7-9cf8-57d017a0ce0d-var-log\") pod \"ovn-controller-ovs-kzv7f\" (UID: \"9bf77eb8-82fb-4ad7-9cf8-57d017a0ce0d\") " pod="openstack/ovn-controller-ovs-kzv7f" Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.417020 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/da545e4e-8f60-4fb5-93e8-d9e9014c3c74-var-run\") pod \"ovn-controller-qjpvf\" (UID: \"da545e4e-8f60-4fb5-93e8-d9e9014c3c74\") " pod="openstack/ovn-controller-qjpvf" Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.417906 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/da545e4e-8f60-4fb5-93e8-d9e9014c3c74-var-run\") pod \"ovn-controller-qjpvf\" (UID: \"da545e4e-8f60-4fb5-93e8-d9e9014c3c74\") " pod="openstack/ovn-controller-qjpvf" Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.418550 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/9bf77eb8-82fb-4ad7-9cf8-57d017a0ce0d-var-lib\") pod \"ovn-controller-ovs-kzv7f\" (UID: \"9bf77eb8-82fb-4ad7-9cf8-57d017a0ce0d\") " pod="openstack/ovn-controller-ovs-kzv7f" Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.418949 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/da545e4e-8f60-4fb5-93e8-d9e9014c3c74-var-run-ovn\") pod \"ovn-controller-qjpvf\" (UID: \"da545e4e-8f60-4fb5-93e8-d9e9014c3c74\") " pod="openstack/ovn-controller-qjpvf" Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.418979 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/da545e4e-8f60-4fb5-93e8-d9e9014c3c74-var-log-ovn\") pod \"ovn-controller-qjpvf\" (UID: \"da545e4e-8f60-4fb5-93e8-d9e9014c3c74\") " pod="openstack/ovn-controller-qjpvf" Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.419070 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/9bf77eb8-82fb-4ad7-9cf8-57d017a0ce0d-var-log\") pod \"ovn-controller-ovs-kzv7f\" (UID: \"9bf77eb8-82fb-4ad7-9cf8-57d017a0ce0d\") " pod="openstack/ovn-controller-ovs-kzv7f" Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.419440 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/9bf77eb8-82fb-4ad7-9cf8-57d017a0ce0d-etc-ovs\") pod \"ovn-controller-ovs-kzv7f\" (UID: \"9bf77eb8-82fb-4ad7-9cf8-57d017a0ce0d\") " pod="openstack/ovn-controller-ovs-kzv7f" Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.419490 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9bf77eb8-82fb-4ad7-9cf8-57d017a0ce0d-var-run\") pod \"ovn-controller-ovs-kzv7f\" (UID: \"9bf77eb8-82fb-4ad7-9cf8-57d017a0ce0d\") " pod="openstack/ovn-controller-ovs-kzv7f" Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.421828 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9bf77eb8-82fb-4ad7-9cf8-57d017a0ce0d-scripts\") pod \"ovn-controller-ovs-kzv7f\" (UID: \"9bf77eb8-82fb-4ad7-9cf8-57d017a0ce0d\") " pod="openstack/ovn-controller-ovs-kzv7f" Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 
10:50:24.429717 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/da545e4e-8f60-4fb5-93e8-d9e9014c3c74-scripts\") pod \"ovn-controller-qjpvf\" (UID: \"da545e4e-8f60-4fb5-93e8-d9e9014c3c74\") " pod="openstack/ovn-controller-qjpvf" Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.433148 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da545e4e-8f60-4fb5-93e8-d9e9014c3c74-combined-ca-bundle\") pod \"ovn-controller-qjpvf\" (UID: \"da545e4e-8f60-4fb5-93e8-d9e9014c3c74\") " pod="openstack/ovn-controller-qjpvf" Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.437407 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/da545e4e-8f60-4fb5-93e8-d9e9014c3c74-ovn-controller-tls-certs\") pod \"ovn-controller-qjpvf\" (UID: \"da545e4e-8f60-4fb5-93e8-d9e9014c3c74\") " pod="openstack/ovn-controller-qjpvf" Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.437785 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdt8d\" (UniqueName: \"kubernetes.io/projected/9bf77eb8-82fb-4ad7-9cf8-57d017a0ce0d-kube-api-access-kdt8d\") pod \"ovn-controller-ovs-kzv7f\" (UID: \"9bf77eb8-82fb-4ad7-9cf8-57d017a0ce0d\") " pod="openstack/ovn-controller-ovs-kzv7f" Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.464073 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-md975\" (UniqueName: \"kubernetes.io/projected/da545e4e-8f60-4fb5-93e8-d9e9014c3c74-kube-api-access-md975\") pod \"ovn-controller-qjpvf\" (UID: \"da545e4e-8f60-4fb5-93e8-d9e9014c3c74\") " pod="openstack/ovn-controller-qjpvf" Nov 25 10:50:24 crc kubenswrapper[4813]: E1125 10:50:24.501053 4813 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.129.56.91:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.518945 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.519002 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.519028 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.519057 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.519210 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.519476 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.519620 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.519730 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.557373 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-qjpvf" Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.584198 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-kzv7f" Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.621329 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.621475 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.621536 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.621579 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.621615 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.621645 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.621671 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.621716 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.621804 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 10:50:24 crc 
kubenswrapper[4813]: I1125 10:50:24.621852 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.622012 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.622091 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.622121 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.622144 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.622171 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.622197 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 10:50:24 crc kubenswrapper[4813]: I1125 10:50:24.802702 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 10:50:25 crc kubenswrapper[4813]: I1125 10:50:25.149799 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 25 10:50:25 crc kubenswrapper[4813]: I1125 10:50:25.152235 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 25 10:50:25 crc kubenswrapper[4813]: I1125 10:50:25.153208 4813 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7cbb3888ff07d07784e188a0b7b49e0f5b421cfaeb61924a0a46094fb3795b32" exitCode=0 Nov 25 10:50:25 crc kubenswrapper[4813]: I1125 10:50:25.153238 4813 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="46e1b456988c700012c86fac792b65d2e7c9a049057d5a17efbf600418191910" exitCode=0 Nov 25 10:50:25 crc kubenswrapper[4813]: I1125 10:50:25.153249 4813 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f80f2017cddd8c12997b1818074df5aa37a902dca43c4b60dda58080e1887f8c" exitCode=0 Nov 25 10:50:25 crc kubenswrapper[4813]: I1125 10:50:25.153258 4813 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f225dc69c294a0063eda858d71902e848fb59d4595c25bfeecdf8dfb60fdcd6f" exitCode=2 Nov 25 10:50:25 crc kubenswrapper[4813]: I1125 10:50:25.153329 4813 scope.go:117] "RemoveContainer" containerID="e393f04b541e0fc8c686b42396605529aa65fdaaf6602dd7c64a322a5071d643" Nov 25 10:50:25 crc kubenswrapper[4813]: I1125 10:50:25.155658 4813 generic.go:334] "Generic (PLEG): container finished" podID="2c3ebcfb-71d9-4d57-824a-b6468b15791e" containerID="e997765f737a2fde8118b784a45edcef8e97712647cf86833d19264a8150d1c3" exitCode=0 Nov 25 10:50:25 crc kubenswrapper[4813]: I1125 10:50:25.155792 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"2c3ebcfb-71d9-4d57-824a-b6468b15791e","Type":"ContainerDied","Data":"e997765f737a2fde8118b784a45edcef8e97712647cf86833d19264a8150d1c3"} Nov 25 10:50:25 crc kubenswrapper[4813]: I1125 10:50:25.156340 4813 status_manager.go:851] "Failed to get status for pod" podUID="2c3ebcfb-71d9-4d57-824a-b6468b15791e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:28 crc kubenswrapper[4813]: I1125 10:50:28.183046 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 25 10:50:28 crc kubenswrapper[4813]: I1125 10:50:28.184221 4813 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="cf4d6feac8fd516ce2d5e2ec13519c2bbd2d152cffe7c434fe2c4b478e8c9a7e" exitCode=0 Nov 25 10:50:31 crc kubenswrapper[4813]: E1125 10:50:31.783084 4813 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:31 crc kubenswrapper[4813]: E1125 
10:50:31.783794 4813 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:31 crc kubenswrapper[4813]: E1125 10:50:31.784327 4813 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:31 crc kubenswrapper[4813]: E1125 10:50:31.784811 4813 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:31 crc kubenswrapper[4813]: E1125 10:50:31.785161 4813 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:31 crc kubenswrapper[4813]: I1125 10:50:31.785204 4813 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Nov 25 10:50:31 crc kubenswrapper[4813]: E1125 10:50:31.785518 4813 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.91:6443: connect: connection refused" interval="200ms" Nov 25 10:50:31 crc kubenswrapper[4813]: E1125 10:50:31.987110 4813 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.91:6443: connect: connection refused" interval="400ms" Nov 25 10:50:32 crc kubenswrapper[4813]: I1125 10:50:32.147338 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 25 10:50:32 crc kubenswrapper[4813]: I1125 10:50:32.148198 4813 status_manager.go:851] "Failed to get status for pod" podUID="2c3ebcfb-71d9-4d57-824a-b6468b15791e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:32 crc kubenswrapper[4813]: I1125 10:50:32.226105 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"2c3ebcfb-71d9-4d57-824a-b6468b15791e","Type":"ContainerDied","Data":"c6384e09d9250afe7588a52052612bb78f193e4cfbf325d504522d3f5ec80a63"} Nov 25 10:50:32 crc kubenswrapper[4813]: I1125 10:50:32.226181 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6384e09d9250afe7588a52052612bb78f193e4cfbf325d504522d3f5ec80a63" Nov 25 10:50:32 crc kubenswrapper[4813]: I1125 10:50:32.226202 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 25 10:50:32 crc kubenswrapper[4813]: I1125 10:50:32.264551 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2c3ebcfb-71d9-4d57-824a-b6468b15791e-kubelet-dir\") pod \"2c3ebcfb-71d9-4d57-824a-b6468b15791e\" (UID: \"2c3ebcfb-71d9-4d57-824a-b6468b15791e\") " Nov 25 10:50:32 crc kubenswrapper[4813]: I1125 10:50:32.264695 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2c3ebcfb-71d9-4d57-824a-b6468b15791e-var-lock\") pod \"2c3ebcfb-71d9-4d57-824a-b6468b15791e\" (UID: \"2c3ebcfb-71d9-4d57-824a-b6468b15791e\") " Nov 25 10:50:32 crc kubenswrapper[4813]: I1125 10:50:32.264792 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c3ebcfb-71d9-4d57-824a-b6468b15791e-var-lock" (OuterVolumeSpecName: "var-lock") pod "2c3ebcfb-71d9-4d57-824a-b6468b15791e" (UID: "2c3ebcfb-71d9-4d57-824a-b6468b15791e"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 10:50:32 crc kubenswrapper[4813]: I1125 10:50:32.264852 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2c3ebcfb-71d9-4d57-824a-b6468b15791e-kube-api-access\") pod \"2c3ebcfb-71d9-4d57-824a-b6468b15791e\" (UID: \"2c3ebcfb-71d9-4d57-824a-b6468b15791e\") " Nov 25 10:50:32 crc kubenswrapper[4813]: I1125 10:50:32.264860 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c3ebcfb-71d9-4d57-824a-b6468b15791e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2c3ebcfb-71d9-4d57-824a-b6468b15791e" (UID: "2c3ebcfb-71d9-4d57-824a-b6468b15791e"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 10:50:32 crc kubenswrapper[4813]: I1125 10:50:32.265347 4813 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2c3ebcfb-71d9-4d57-824a-b6468b15791e-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 25 10:50:32 crc kubenswrapper[4813]: I1125 10:50:32.265377 4813 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2c3ebcfb-71d9-4d57-824a-b6468b15791e-var-lock\") on node \"crc\" DevicePath \"\"" Nov 25 10:50:32 crc kubenswrapper[4813]: I1125 10:50:32.270442 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c3ebcfb-71d9-4d57-824a-b6468b15791e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2c3ebcfb-71d9-4d57-824a-b6468b15791e" (UID: "2c3ebcfb-71d9-4d57-824a-b6468b15791e"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:50:32 crc kubenswrapper[4813]: I1125 10:50:32.368632 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2c3ebcfb-71d9-4d57-824a-b6468b15791e-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 25 10:50:32 crc kubenswrapper[4813]: E1125 10:50:32.387880 4813 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.91:6443: connect: connection refused" interval="800ms" Nov 25 10:50:32 crc kubenswrapper[4813]: I1125 10:50:32.551918 4813 status_manager.go:851] "Failed to get status for pod" podUID="2c3ebcfb-71d9-4d57-824a-b6468b15791e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:33 crc kubenswrapper[4813]: E1125 10:50:33.188609 4813 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.91:6443: connect: connection refused" interval="1.6s" Nov 25 10:50:33 crc kubenswrapper[4813]: I1125 10:50:33.625149 4813 status_manager.go:851] "Failed to get status for pod" podUID="2c3ebcfb-71d9-4d57-824a-b6468b15791e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:34 crc kubenswrapper[4813]: E1125 10:50:34.790559 4813 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.91:6443: connect: connection refused" interval="3.2s" Nov 25 10:50:35 crc kubenswrapper[4813]: I1125 10:50:35.253157 4813 generic.go:334] "Generic (PLEG): container finished" podID="a6eb0ffd-2e55-4d5a-9ac7-19b25ba6ec8b" containerID="0bea679701fb92dd51b86000dddec84983c7baac6e6090c8a3567ede6024ce13" exitCode=1 Nov 25 10:50:35 crc kubenswrapper[4813]: I1125 10:50:35.253194 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6b84b955f5-mmrm7" event={"ID":"a6eb0ffd-2e55-4d5a-9ac7-19b25ba6ec8b","Type":"ContainerDied","Data":"0bea679701fb92dd51b86000dddec84983c7baac6e6090c8a3567ede6024ce13"} Nov 25 10:50:35 crc kubenswrapper[4813]: I1125 10:50:35.253913 4813 scope.go:117] "RemoveContainer" containerID="0bea679701fb92dd51b86000dddec84983c7baac6e6090c8a3567ede6024ce13" Nov 25 10:50:35 crc kubenswrapper[4813]: I1125 10:50:35.254230 4813 status_manager.go:851] "Failed to get status for pod" podUID="a6eb0ffd-2e55-4d5a-9ac7-19b25ba6ec8b" pod="metallb-system/metallb-operator-controller-manager-6b84b955f5-mmrm7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-6b84b955f5-mmrm7\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:35 crc kubenswrapper[4813]: I1125 10:50:35.254543 4813 status_manager.go:851] "Failed to get status for pod" podUID="2c3ebcfb-71d9-4d57-824a-b6468b15791e" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:36 crc kubenswrapper[4813]: E1125 10:50:36.898198 4813 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/events\": dial tcp 38.129.56.91:6443: connect: connection refused" event="&Event{ObjectMeta:{metallb-operator-controller-manager-6b84b955f5-mmrm7.187b3a59e9fb49b9 metallb-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:metallb-system,Name:metallb-operator-controller-manager-6b84b955f5-mmrm7,UID:a6eb0ffd-2e55-4d5a-9ac7-19b25ba6ec8b,APIVersion:v1,ResourceVersion:32299,FieldPath:spec.containers{manager},},Reason:Pulled,Message:Container image \"registry.redhat.io/openshift4/metallb-rhel9-operator@sha256:3d0ea2e1939176fd381d97016be6a158700d3c01a2d116e6d7887d6fb3e33ddd\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-25 10:50:36.897741241 +0000 UTC m=+1134.027451127,LastTimestamp:2025-11-25 10:50:36.897741241 +0000 UTC m=+1134.027451127,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 25 10:50:37 crc kubenswrapper[4813]: I1125 10:50:37.046490 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 25 10:50:37 crc kubenswrapper[4813]: I1125 10:50:37.047782 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 10:50:37 crc kubenswrapper[4813]: I1125 10:50:37.048436 4813 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:37 crc kubenswrapper[4813]: I1125 10:50:37.049002 4813 status_manager.go:851] "Failed to get status for pod" podUID="a6eb0ffd-2e55-4d5a-9ac7-19b25ba6ec8b" pod="metallb-system/metallb-operator-controller-manager-6b84b955f5-mmrm7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-6b84b955f5-mmrm7\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:37 crc kubenswrapper[4813]: I1125 10:50:37.049344 4813 status_manager.go:851] "Failed to get status for pod" podUID="2c3ebcfb-71d9-4d57-824a-b6468b15791e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:37 crc kubenswrapper[4813]: I1125 10:50:37.163072 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Nov 25 10:50:37 crc kubenswrapper[4813]: I1125 10:50:37.163447 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Nov 25 10:50:37 crc kubenswrapper[4813]: I1125 10:50:37.163722 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Nov 25 10:50:37 crc kubenswrapper[4813]: I1125 10:50:37.163202 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 10:50:37 crc kubenswrapper[4813]: I1125 10:50:37.164064 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 10:50:37 crc kubenswrapper[4813]: I1125 10:50:37.164176 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 10:50:37 crc kubenswrapper[4813]: I1125 10:50:37.265553 4813 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Nov 25 10:50:37 crc kubenswrapper[4813]: I1125 10:50:37.265588 4813 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Nov 25 10:50:37 crc kubenswrapper[4813]: I1125 10:50:37.265597 4813 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Nov 25 10:50:37 crc kubenswrapper[4813]: I1125 10:50:37.268373 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Nov 25 10:50:37 crc kubenswrapper[4813]: I1125 10:50:37.268419 4813 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="bf09669b247e0daa0787d296aa833570e1a542082a7a698bb499dc34f16fa4be" exitCode=1 Nov 25 10:50:37 crc kubenswrapper[4813]: I1125 10:50:37.268476 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"bf09669b247e0daa0787d296aa833570e1a542082a7a698bb499dc34f16fa4be"} Nov 25 10:50:37 crc kubenswrapper[4813]: I1125 10:50:37.269246 4813 scope.go:117] "RemoveContainer" containerID="bf09669b247e0daa0787d296aa833570e1a542082a7a698bb499dc34f16fa4be" Nov 25 10:50:37 crc 
kubenswrapper[4813]: I1125 10:50:37.269938 4813 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:37 crc kubenswrapper[4813]: I1125 10:50:37.270133 4813 status_manager.go:851] "Failed to get status for pod" podUID="a6eb0ffd-2e55-4d5a-9ac7-19b25ba6ec8b" pod="metallb-system/metallb-operator-controller-manager-6b84b955f5-mmrm7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-6b84b955f5-mmrm7\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:37 crc kubenswrapper[4813]: I1125 10:50:37.270397 4813 status_manager.go:851] "Failed to get status for pod" podUID="2c3ebcfb-71d9-4d57-824a-b6468b15791e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:37 crc kubenswrapper[4813]: I1125 10:50:37.270757 4813 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:37 crc kubenswrapper[4813]: I1125 10:50:37.273653 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 25 10:50:37 crc kubenswrapper[4813]: I1125 10:50:37.274520 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 10:50:37 crc kubenswrapper[4813]: I1125 10:50:37.288448 4813 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:37 crc kubenswrapper[4813]: I1125 10:50:37.288983 4813 status_manager.go:851] "Failed to get status for pod" podUID="a6eb0ffd-2e55-4d5a-9ac7-19b25ba6ec8b" pod="metallb-system/metallb-operator-controller-manager-6b84b955f5-mmrm7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-6b84b955f5-mmrm7\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:37 crc kubenswrapper[4813]: I1125 10:50:37.289293 4813 status_manager.go:851] "Failed to get status for pod" podUID="2c3ebcfb-71d9-4d57-824a-b6468b15791e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:37 crc kubenswrapper[4813]: I1125 10:50:37.289570 4813 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:37 crc kubenswrapper[4813]: I1125 10:50:37.487669 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 10:50:37 crc kubenswrapper[4813]: I1125 10:50:37.634250 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Nov 25 10:50:37 crc kubenswrapper[4813]: I1125 10:50:37.880448 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 10:50:37 crc kubenswrapper[4813]: E1125 10:50:37.992666 4813 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.91:6443: connect: connection refused" interval="6.4s" Nov 25 10:50:39 crc kubenswrapper[4813]: I1125 10:50:39.621049 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 10:50:39 crc kubenswrapper[4813]: I1125 10:50:39.622518 4813 status_manager.go:851] "Failed to get status for pod" podUID="a6eb0ffd-2e55-4d5a-9ac7-19b25ba6ec8b" pod="metallb-system/metallb-operator-controller-manager-6b84b955f5-mmrm7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-6b84b955f5-mmrm7\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:39 crc kubenswrapper[4813]: I1125 10:50:39.622942 4813 status_manager.go:851] "Failed to get status for pod" podUID="2c3ebcfb-71d9-4d57-824a-b6468b15791e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:39 crc kubenswrapper[4813]: I1125 10:50:39.623318 4813 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:39 crc kubenswrapper[4813]: I1125 10:50:39.641599 4813 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="86379c39-b839-4552-949c-35431188a3a7" Nov 25 10:50:39 crc kubenswrapper[4813]: I1125 10:50:39.641629 4813 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="86379c39-b839-4552-949c-35431188a3a7" Nov 25 10:50:39 crc kubenswrapper[4813]: E1125 10:50:39.642251 4813 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.91:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 10:50:39 crc kubenswrapper[4813]: I1125 10:50:39.643103 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 10:50:40 crc kubenswrapper[4813]: I1125 10:50:40.509276 4813 scope.go:117] "RemoveContainer" containerID="7cbb3888ff07d07784e188a0b7b49e0f5b421cfaeb61924a0a46094fb3795b32" Nov 25 10:50:40 crc kubenswrapper[4813]: E1125 10:50:40.613498 4813 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/events\": dial tcp 38.129.56.91:6443: connect: connection refused" event="&Event{ObjectMeta:{metallb-operator-controller-manager-6b84b955f5-mmrm7.187b3a59e9fb49b9 metallb-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:metallb-system,Name:metallb-operator-controller-manager-6b84b955f5-mmrm7,UID:a6eb0ffd-2e55-4d5a-9ac7-19b25ba6ec8b,APIVersion:v1,ResourceVersion:32299,FieldPath:spec.containers{manager},},Reason:Pulled,Message:Container image \"registry.redhat.io/openshift4/metallb-rhel9-operator@sha256:3d0ea2e1939176fd381d97016be6a158700d3c01a2d116e6d7887d6fb3e33ddd\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-25 10:50:36.897741241 +0000 UTC m=+1134.027451127,LastTimestamp:2025-11-25 10:50:36.897741241 +0000 UTC m=+1134.027451127,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 25 10:50:40 crc kubenswrapper[4813]: I1125 10:50:40.624819 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 10:50:40 crc kubenswrapper[4813]: E1125 10:50:40.979427 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 25 10:50:40 crc kubenswrapper[4813]: E1125 10:50:40.979610 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zxcrh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-wn82k_openstack(69f8a703-848a-4de9-a102-81426dcd6c3a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 10:50:40 crc kubenswrapper[4813]: E1125 10:50:40.980837 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-wn82k" podUID="69f8a703-848a-4de9-a102-81426dcd6c3a" Nov 25 10:50:41 crc kubenswrapper[4813]: E1125 10:50:41.202463 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 25 10:50:41 crc kubenswrapper[4813]: E1125 10:50:41.203062 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pkv86,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-s5ghf_openstack(62da3927-ddca-4922-8e9b-c96d06c44c31): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 10:50:41 crc kubenswrapper[4813]: E1125 10:50:41.204247 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-s5ghf" podUID="62da3927-ddca-4922-8e9b-c96d06c44c31" Nov 25 10:50:41 crc kubenswrapper[4813]: I1125 10:50:41.296760 4813 scope.go:117] "RemoveContainer" containerID="e393f04b541e0fc8c686b42396605529aa65fdaaf6602dd7c64a322a5071d643" Nov 25 10:50:41 crc kubenswrapper[4813]: E1125 10:50:41.299231 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e393f04b541e0fc8c686b42396605529aa65fdaaf6602dd7c64a322a5071d643\": container with ID starting with e393f04b541e0fc8c686b42396605529aa65fdaaf6602dd7c64a322a5071d643 not found: ID does not exist" containerID="e393f04b541e0fc8c686b42396605529aa65fdaaf6602dd7c64a322a5071d643" Nov 25 10:50:41 crc kubenswrapper[4813]: I1125 10:50:41.299282 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e393f04b541e0fc8c686b42396605529aa65fdaaf6602dd7c64a322a5071d643"} err="failed to get container status \"e393f04b541e0fc8c686b42396605529aa65fdaaf6602dd7c64a322a5071d643\": rpc error: code = NotFound desc = could not find container \"e393f04b541e0fc8c686b42396605529aa65fdaaf6602dd7c64a322a5071d643\": container with ID starting with e393f04b541e0fc8c686b42396605529aa65fdaaf6602dd7c64a322a5071d643 not found: ID does not exist" Nov 25 10:50:41 crc kubenswrapper[4813]: I1125 10:50:41.299335 4813 
scope.go:117] "RemoveContainer" containerID="46e1b456988c700012c86fac792b65d2e7c9a049057d5a17efbf600418191910" Nov 25 10:50:41 crc kubenswrapper[4813]: I1125 10:50:41.313647 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"a56bcf7e4ba83c39a15ecd4e588119a50437cd6730ba447f26b522d84881a2c6"} Nov 25 10:50:41 crc kubenswrapper[4813]: I1125 10:50:41.316339 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 25 10:50:41 crc kubenswrapper[4813]: I1125 10:50:41.317605 4813 status_manager.go:851] "Failed to get status for pod" podUID="a6eb0ffd-2e55-4d5a-9ac7-19b25ba6ec8b" pod="metallb-system/metallb-operator-controller-manager-6b84b955f5-mmrm7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-6b84b955f5-mmrm7\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:41 crc kubenswrapper[4813]: I1125 10:50:41.317857 4813 status_manager.go:851] "Failed to get status for pod" podUID="2c3ebcfb-71d9-4d57-824a-b6468b15791e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:41 crc kubenswrapper[4813]: I1125 10:50:41.318060 4813 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:41 crc kubenswrapper[4813]: I1125 10:50:41.318251 4813 status_manager.go:851] "Failed to get status for pod" podUID="69f8a703-848a-4de9-a102-81426dcd6c3a" pod="openstack/dnsmasq-dns-675f4bcbfc-wn82k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/dnsmasq-dns-675f4bcbfc-wn82k\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:41 crc kubenswrapper[4813]: I1125 10:50:41.318498 4813 status_manager.go:851] "Failed to get status for pod" podUID="62da3927-ddca-4922-8e9b-c96d06c44c31" pod="openstack/dnsmasq-dns-78dd6ddcc-s5ghf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/dnsmasq-dns-78dd6ddcc-s5ghf\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:41 crc kubenswrapper[4813]: I1125 10:50:41.318766 4813 status_manager.go:851] "Failed to get status for pod" podUID="69f8a703-848a-4de9-a102-81426dcd6c3a" pod="openstack/dnsmasq-dns-675f4bcbfc-wn82k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/dnsmasq-dns-675f4bcbfc-wn82k\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:41 crc kubenswrapper[4813]: I1125 10:50:41.318952 4813 status_manager.go:851] "Failed to get status for pod" podUID="62da3927-ddca-4922-8e9b-c96d06c44c31" pod="openstack/dnsmasq-dns-78dd6ddcc-s5ghf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/dnsmasq-dns-78dd6ddcc-s5ghf\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:41 crc kubenswrapper[4813]: I1125 10:50:41.319122 4813 status_manager.go:851] "Failed to get status for 
pod" podUID="a6eb0ffd-2e55-4d5a-9ac7-19b25ba6ec8b" pod="metallb-system/metallb-operator-controller-manager-6b84b955f5-mmrm7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-6b84b955f5-mmrm7\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:41 crc kubenswrapper[4813]: I1125 10:50:41.320006 4813 status_manager.go:851] "Failed to get status for pod" podUID="2c3ebcfb-71d9-4d57-824a-b6468b15791e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:41 crc kubenswrapper[4813]: I1125 10:50:41.320337 4813 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:41 crc kubenswrapper[4813]: E1125 10:50:41.324123 4813 log.go:32] "RunPodSandbox from runtime service failed" err=< Nov 25 10:50:41 crc kubenswrapper[4813]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_memcached-0_openstack_11b88009-8577-4264-afbf-8aee9bfc90f8_0(994f3a9c54ade11accaf6483a061285791226c44f5493bed4ad49de7399b9962): error adding pod openstack_memcached-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"994f3a9c54ade11accaf6483a061285791226c44f5493bed4ad49de7399b9962" Netns:"/var/run/netns/592223b6-e28a-4b2c-94f4-774b31eb53b6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=memcached-0;K8S_POD_INFRA_CONTAINER_ID=994f3a9c54ade11accaf6483a061285791226c44f5493bed4ad49de7399b9962;K8S_POD_UID=11b88009-8577-4264-afbf-8aee9bfc90f8" Path:"" ERRORED: error configuring pod [openstack/memcached-0] networking: Multus: [openstack/memcached-0/11b88009-8577-4264-afbf-8aee9bfc90f8]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod memcached-0 in out of cluster comm: SetNetworkStatus: failed to update the pod memcached-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/memcached-0?timeout=1m0s": dial tcp 38.129.56.91:6443: connect: connection refused Nov 25 10:50:41 crc kubenswrapper[4813]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Nov 25 10:50:41 crc kubenswrapper[4813]: > Nov 25 10:50:41 crc kubenswrapper[4813]: E1125 10:50:41.324186 4813 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Nov 25 10:50:41 crc kubenswrapper[4813]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_memcached-0_openstack_11b88009-8577-4264-afbf-8aee9bfc90f8_0(994f3a9c54ade11accaf6483a061285791226c44f5493bed4ad49de7399b9962): error adding pod openstack_memcached-0 to CNI network "multus-cni-network": plugin type="multus-shim" 
name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"994f3a9c54ade11accaf6483a061285791226c44f5493bed4ad49de7399b9962" Netns:"/var/run/netns/592223b6-e28a-4b2c-94f4-774b31eb53b6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=memcached-0;K8S_POD_INFRA_CONTAINER_ID=994f3a9c54ade11accaf6483a061285791226c44f5493bed4ad49de7399b9962;K8S_POD_UID=11b88009-8577-4264-afbf-8aee9bfc90f8" Path:"" ERRORED: error configuring pod [openstack/memcached-0] networking: Multus: [openstack/memcached-0/11b88009-8577-4264-afbf-8aee9bfc90f8]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod memcached-0 in out of cluster comm: SetNetworkStatus: failed to update the pod memcached-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/memcached-0?timeout=1m0s": dial tcp 38.129.56.91:6443: connect: connection refused Nov 25 10:50:41 crc kubenswrapper[4813]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Nov 25 10:50:41 crc kubenswrapper[4813]: > pod="openstack/memcached-0" Nov 25 10:50:41 crc kubenswrapper[4813]: E1125 10:50:41.324210 4813 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Nov 25 10:50:41 crc kubenswrapper[4813]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_memcached-0_openstack_11b88009-8577-4264-afbf-8aee9bfc90f8_0(994f3a9c54ade11accaf6483a061285791226c44f5493bed4ad49de7399b9962): error adding pod openstack_memcached-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"994f3a9c54ade11accaf6483a061285791226c44f5493bed4ad49de7399b9962" Netns:"/var/run/netns/592223b6-e28a-4b2c-94f4-774b31eb53b6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=memcached-0;K8S_POD_INFRA_CONTAINER_ID=994f3a9c54ade11accaf6483a061285791226c44f5493bed4ad49de7399b9962;K8S_POD_UID=11b88009-8577-4264-afbf-8aee9bfc90f8" Path:"" ERRORED: error configuring pod [openstack/memcached-0] networking: Multus: [openstack/memcached-0/11b88009-8577-4264-afbf-8aee9bfc90f8]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod memcached-0 in out of cluster comm: SetNetworkStatus: failed to update the pod memcached-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/memcached-0?timeout=1m0s": dial tcp 38.129.56.91:6443: connect: connection refused Nov 25 10:50:41 crc kubenswrapper[4813]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Nov 25 10:50:41 crc kubenswrapper[4813]: > pod="openstack/memcached-0" Nov 25 10:50:41 crc kubenswrapper[4813]: E1125 10:50:41.324278 4813 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"memcached-0_openstack(11b88009-8577-4264-afbf-8aee9bfc90f8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"memcached-0_openstack(11b88009-8577-4264-afbf-8aee9bfc90f8)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_memcached-0_openstack_11b88009-8577-4264-afbf-8aee9bfc90f8_0(994f3a9c54ade11accaf6483a061285791226c44f5493bed4ad49de7399b9962): error adding pod openstack_memcached-0 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"994f3a9c54ade11accaf6483a061285791226c44f5493bed4ad49de7399b9962\\\" Netns:\\\"/var/run/netns/592223b6-e28a-4b2c-94f4-774b31eb53b6\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=memcached-0;K8S_POD_INFRA_CONTAINER_ID=994f3a9c54ade11accaf6483a061285791226c44f5493bed4ad49de7399b9962;K8S_POD_UID=11b88009-8577-4264-afbf-8aee9bfc90f8\\\" Path:\\\"\\\" ERRORED: error configuring pod [openstack/memcached-0] networking: Multus: [openstack/memcached-0/11b88009-8577-4264-afbf-8aee9bfc90f8]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod memcached-0 in out of cluster comm: SetNetworkStatus: failed to update the pod memcached-0 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/memcached-0?timeout=1m0s\\\": dial tcp 38.129.56.91:6443: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openstack/memcached-0" podUID="11b88009-8577-4264-afbf-8aee9bfc90f8" Nov 25 10:50:41 crc kubenswrapper[4813]: E1125 10:50:41.340185 4813 log.go:32] "RunPodSandbox from runtime service failed" err=< Nov 25 10:50:41 crc kubenswrapper[4813]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstack-galera-0_openstack_9005be17-9874-4f4f-bd91-39b3c74314ec_0(2ba4c0c40cd16d3eaebb1eda4449e7bb1506893307f6a0dae93e7a034b0b6c7a): error adding pod openstack_openstack-galera-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"2ba4c0c40cd16d3eaebb1eda4449e7bb1506893307f6a0dae93e7a034b0b6c7a" Netns:"/var/run/netns/e7c22ad6-d356-4b80-b853-041f347495b4" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstack-galera-0;K8S_POD_INFRA_CONTAINER_ID=2ba4c0c40cd16d3eaebb1eda4449e7bb1506893307f6a0dae93e7a034b0b6c7a;K8S_POD_UID=9005be17-9874-4f4f-bd91-39b3c74314ec" Path:"" ERRORED: error configuring pod [openstack/openstack-galera-0] networking: Multus: [openstack/openstack-galera-0/9005be17-9874-4f4f-bd91-39b3c74314ec]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod openstack-galera-0 in out of cluster comm: SetNetworkStatus: failed to update the pod openstack-galera-0 in out of cluster comm: status update failed for 
pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/openstack-galera-0?timeout=1m0s": dial tcp 38.129.56.91:6443: connect: connection refused Nov 25 10:50:41 crc kubenswrapper[4813]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Nov 25 10:50:41 crc kubenswrapper[4813]: > Nov 25 10:50:41 crc kubenswrapper[4813]: E1125 10:50:41.340263 4813 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Nov 25 10:50:41 crc kubenswrapper[4813]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstack-galera-0_openstack_9005be17-9874-4f4f-bd91-39b3c74314ec_0(2ba4c0c40cd16d3eaebb1eda4449e7bb1506893307f6a0dae93e7a034b0b6c7a): error adding pod openstack_openstack-galera-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"2ba4c0c40cd16d3eaebb1eda4449e7bb1506893307f6a0dae93e7a034b0b6c7a" Netns:"/var/run/netns/e7c22ad6-d356-4b80-b853-041f347495b4" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstack-galera-0;K8S_POD_INFRA_CONTAINER_ID=2ba4c0c40cd16d3eaebb1eda4449e7bb1506893307f6a0dae93e7a034b0b6c7a;K8S_POD_UID=9005be17-9874-4f4f-bd91-39b3c74314ec" Path:"" ERRORED: error configuring pod [openstack/openstack-galera-0] networking: Multus: [openstack/openstack-galera-0/9005be17-9874-4f4f-bd91-39b3c74314ec]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod openstack-galera-0 in out of cluster comm: SetNetworkStatus: failed to update the pod openstack-galera-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/openstack-galera-0?timeout=1m0s": dial tcp 38.129.56.91:6443: connect: connection refused Nov 25 10:50:41 crc kubenswrapper[4813]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Nov 25 10:50:41 crc kubenswrapper[4813]: > pod="openstack/openstack-galera-0" Nov 25 10:50:41 crc kubenswrapper[4813]: E1125 10:50:41.340285 4813 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Nov 25 10:50:41 crc kubenswrapper[4813]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstack-galera-0_openstack_9005be17-9874-4f4f-bd91-39b3c74314ec_0(2ba4c0c40cd16d3eaebb1eda4449e7bb1506893307f6a0dae93e7a034b0b6c7a): error adding pod openstack_openstack-galera-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"2ba4c0c40cd16d3eaebb1eda4449e7bb1506893307f6a0dae93e7a034b0b6c7a" Netns:"/var/run/netns/e7c22ad6-d356-4b80-b853-041f347495b4" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstack-galera-0;K8S_POD_INFRA_CONTAINER_ID=2ba4c0c40cd16d3eaebb1eda4449e7bb1506893307f6a0dae93e7a034b0b6c7a;K8S_POD_UID=9005be17-9874-4f4f-bd91-39b3c74314ec" Path:"" ERRORED: error configuring pod [openstack/openstack-galera-0] networking: Multus: [openstack/openstack-galera-0/9005be17-9874-4f4f-bd91-39b3c74314ec]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod openstack-galera-0 in out of cluster comm: SetNetworkStatus: failed to update the pod openstack-galera-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/openstack-galera-0?timeout=1m0s": dial tcp 38.129.56.91:6443: connect: connection refused Nov 25 10:50:41 crc kubenswrapper[4813]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Nov 25 10:50:41 crc kubenswrapper[4813]: > pod="openstack/openstack-galera-0" Nov 25 10:50:41 crc kubenswrapper[4813]: E1125 10:50:41.340353 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"openstack-galera-0_openstack(9005be17-9874-4f4f-bd91-39b3c74314ec)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"openstack-galera-0_openstack(9005be17-9874-4f4f-bd91-39b3c74314ec)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstack-galera-0_openstack_9005be17-9874-4f4f-bd91-39b3c74314ec_0(2ba4c0c40cd16d3eaebb1eda4449e7bb1506893307f6a0dae93e7a034b0b6c7a): error adding pod openstack_openstack-galera-0 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"2ba4c0c40cd16d3eaebb1eda4449e7bb1506893307f6a0dae93e7a034b0b6c7a\\\" Netns:\\\"/var/run/netns/e7c22ad6-d356-4b80-b853-041f347495b4\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstack-galera-0;K8S_POD_INFRA_CONTAINER_ID=2ba4c0c40cd16d3eaebb1eda4449e7bb1506893307f6a0dae93e7a034b0b6c7a;K8S_POD_UID=9005be17-9874-4f4f-bd91-39b3c74314ec\\\" Path:\\\"\\\" ERRORED: error configuring pod [openstack/openstack-galera-0] networking: Multus: [openstack/openstack-galera-0/9005be17-9874-4f4f-bd91-39b3c74314ec]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod openstack-galera-0 in out of cluster comm: SetNetworkStatus: failed to update the pod openstack-galera-0 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/openstack-galera-0?timeout=1m0s\\\": dial tcp 38.129.56.91:6443: connect: connection refused\\n': StdinData: 
{\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openstack/openstack-galera-0" podUID="9005be17-9874-4f4f-bd91-39b3c74314ec" Nov 25 10:50:41 crc kubenswrapper[4813]: I1125 10:50:41.366516 4813 scope.go:117] "RemoveContainer" containerID="f80f2017cddd8c12997b1818074df5aa37a902dca43c4b60dda58080e1887f8c" Nov 25 10:50:41 crc kubenswrapper[4813]: I1125 10:50:41.461116 4813 scope.go:117] "RemoveContainer" containerID="f225dc69c294a0063eda858d71902e848fb59d4595c25bfeecdf8dfb60fdcd6f" Nov 25 10:50:41 crc kubenswrapper[4813]: E1125 10:50:41.474586 4813 log.go:32] "RunPodSandbox from runtime service failed" err=< Nov 25 10:50:41 crc kubenswrapper[4813]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstack-cell1-galera-0_openstack_0444b7b3-af36-4fca-80c6-8348adc42a58_0(61634f2fdb7f104e71b4e10ae814845e70d1a013c18abfb152a6ba363fa5c76f): error adding pod openstack_openstack-cell1-galera-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"61634f2fdb7f104e71b4e10ae814845e70d1a013c18abfb152a6ba363fa5c76f" Netns:"/var/run/netns/2202dee9-c8e2-4f99-842e-56a7a2b54185" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstack-cell1-galera-0;K8S_POD_INFRA_CONTAINER_ID=61634f2fdb7f104e71b4e10ae814845e70d1a013c18abfb152a6ba363fa5c76f;K8S_POD_UID=0444b7b3-af36-4fca-80c6-8348adc42a58" Path:"" ERRORED: error configuring pod [openstack/openstack-cell1-galera-0] networking: Multus: [openstack/openstack-cell1-galera-0/0444b7b3-af36-4fca-80c6-8348adc42a58]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod openstack-cell1-galera-0 in out of cluster comm: SetNetworkStatus: failed to update the pod openstack-cell1-galera-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/openstack-cell1-galera-0?timeout=1m0s": dial tcp 38.129.56.91:6443: connect: connection refused Nov 25 10:50:41 crc kubenswrapper[4813]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Nov 25 10:50:41 crc kubenswrapper[4813]: > Nov 25 10:50:41 crc kubenswrapper[4813]: E1125 10:50:41.474958 4813 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Nov 25 10:50:41 crc kubenswrapper[4813]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstack-cell1-galera-0_openstack_0444b7b3-af36-4fca-80c6-8348adc42a58_0(61634f2fdb7f104e71b4e10ae814845e70d1a013c18abfb152a6ba363fa5c76f): error adding pod openstack_openstack-cell1-galera-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 
400: 'ContainerID:"61634f2fdb7f104e71b4e10ae814845e70d1a013c18abfb152a6ba363fa5c76f" Netns:"/var/run/netns/2202dee9-c8e2-4f99-842e-56a7a2b54185" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstack-cell1-galera-0;K8S_POD_INFRA_CONTAINER_ID=61634f2fdb7f104e71b4e10ae814845e70d1a013c18abfb152a6ba363fa5c76f;K8S_POD_UID=0444b7b3-af36-4fca-80c6-8348adc42a58" Path:"" ERRORED: error configuring pod [openstack/openstack-cell1-galera-0] networking: Multus: [openstack/openstack-cell1-galera-0/0444b7b3-af36-4fca-80c6-8348adc42a58]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod openstack-cell1-galera-0 in out of cluster comm: SetNetworkStatus: failed to update the pod openstack-cell1-galera-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/openstack-cell1-galera-0?timeout=1m0s": dial tcp 38.129.56.91:6443: connect: connection refused Nov 25 10:50:41 crc kubenswrapper[4813]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Nov 25 10:50:41 crc kubenswrapper[4813]: > pod="openstack/openstack-cell1-galera-0" Nov 25 10:50:41 crc kubenswrapper[4813]: E1125 10:50:41.474977 4813 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Nov 25 10:50:41 crc kubenswrapper[4813]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstack-cell1-galera-0_openstack_0444b7b3-af36-4fca-80c6-8348adc42a58_0(61634f2fdb7f104e71b4e10ae814845e70d1a013c18abfb152a6ba363fa5c76f): error adding pod openstack_openstack-cell1-galera-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"61634f2fdb7f104e71b4e10ae814845e70d1a013c18abfb152a6ba363fa5c76f" Netns:"/var/run/netns/2202dee9-c8e2-4f99-842e-56a7a2b54185" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstack-cell1-galera-0;K8S_POD_INFRA_CONTAINER_ID=61634f2fdb7f104e71b4e10ae814845e70d1a013c18abfb152a6ba363fa5c76f;K8S_POD_UID=0444b7b3-af36-4fca-80c6-8348adc42a58" Path:"" ERRORED: error configuring pod [openstack/openstack-cell1-galera-0] networking: Multus: [openstack/openstack-cell1-galera-0/0444b7b3-af36-4fca-80c6-8348adc42a58]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod openstack-cell1-galera-0 in out of cluster comm: SetNetworkStatus: failed to update the pod openstack-cell1-galera-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/openstack-cell1-galera-0?timeout=1m0s": dial tcp 38.129.56.91:6443: connect: connection refused Nov 25 10:50:41 crc kubenswrapper[4813]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Nov 25 10:50:41 crc kubenswrapper[4813]: > 
pod="openstack/openstack-cell1-galera-0" Nov 25 10:50:41 crc kubenswrapper[4813]: E1125 10:50:41.475482 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"openstack-cell1-galera-0_openstack(0444b7b3-af36-4fca-80c6-8348adc42a58)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"openstack-cell1-galera-0_openstack(0444b7b3-af36-4fca-80c6-8348adc42a58)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstack-cell1-galera-0_openstack_0444b7b3-af36-4fca-80c6-8348adc42a58_0(61634f2fdb7f104e71b4e10ae814845e70d1a013c18abfb152a6ba363fa5c76f): error adding pod openstack_openstack-cell1-galera-0 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"61634f2fdb7f104e71b4e10ae814845e70d1a013c18abfb152a6ba363fa5c76f\\\" Netns:\\\"/var/run/netns/2202dee9-c8e2-4f99-842e-56a7a2b54185\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstack-cell1-galera-0;K8S_POD_INFRA_CONTAINER_ID=61634f2fdb7f104e71b4e10ae814845e70d1a013c18abfb152a6ba363fa5c76f;K8S_POD_UID=0444b7b3-af36-4fca-80c6-8348adc42a58\\\" Path:\\\"\\\" ERRORED: error configuring pod [openstack/openstack-cell1-galera-0] networking: Multus: [openstack/openstack-cell1-galera-0/0444b7b3-af36-4fca-80c6-8348adc42a58]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod openstack-cell1-galera-0 in out of cluster comm: SetNetworkStatus: failed to update the pod openstack-cell1-galera-0 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/openstack-cell1-galera-0?timeout=1m0s\\\": dial tcp 38.129.56.91:6443: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openstack/openstack-cell1-galera-0" podUID="0444b7b3-af36-4fca-80c6-8348adc42a58" Nov 25 10:50:41 crc kubenswrapper[4813]: I1125 10:50:41.506874 4813 scope.go:117] "RemoveContainer" containerID="cf4d6feac8fd516ce2d5e2ec13519c2bbd2d152cffe7c434fe2c4b478e8c9a7e" Nov 25 10:50:41 crc kubenswrapper[4813]: I1125 10:50:41.602781 4813 scope.go:117] "RemoveContainer" containerID="f3af1cb4a9a556e116874a9b7f7cb75a99db83b991b65280b6b2a96233ca0a85" Nov 25 10:50:41 crc kubenswrapper[4813]: E1125 10:50:41.664733 4813 log.go:32] "RunPodSandbox from runtime service failed" err=< Nov 25 10:50:41 crc kubenswrapper[4813]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-state-metrics-0_openstack_e9030c35-b810-4f59-b1e6-5daec39fcc6d_0(7f58e66d3697ff0280ab662d103cdc70e06e55bae660d3b11552edb278e1fd47): error adding pod openstack_kube-state-metrics-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"7f58e66d3697ff0280ab662d103cdc70e06e55bae660d3b11552edb278e1fd47" Netns:"/var/run/netns/ad7b67e4-6146-46fb-a3cd-719e7deeefef" 
IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=kube-state-metrics-0;K8S_POD_INFRA_CONTAINER_ID=7f58e66d3697ff0280ab662d103cdc70e06e55bae660d3b11552edb278e1fd47;K8S_POD_UID=e9030c35-b810-4f59-b1e6-5daec39fcc6d" Path:"" ERRORED: error configuring pod [openstack/kube-state-metrics-0] networking: Multus: [openstack/kube-state-metrics-0/e9030c35-b810-4f59-b1e6-5daec39fcc6d]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod kube-state-metrics-0 in out of cluster comm: SetNetworkStatus: failed to update the pod kube-state-metrics-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/kube-state-metrics-0?timeout=1m0s": dial tcp 38.129.56.91:6443: connect: connection refused Nov 25 10:50:41 crc kubenswrapper[4813]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Nov 25 10:50:41 crc kubenswrapper[4813]: > Nov 25 10:50:41 crc kubenswrapper[4813]: E1125 10:50:41.664806 4813 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Nov 25 10:50:41 crc kubenswrapper[4813]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-state-metrics-0_openstack_e9030c35-b810-4f59-b1e6-5daec39fcc6d_0(7f58e66d3697ff0280ab662d103cdc70e06e55bae660d3b11552edb278e1fd47): error adding pod openstack_kube-state-metrics-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"7f58e66d3697ff0280ab662d103cdc70e06e55bae660d3b11552edb278e1fd47" Netns:"/var/run/netns/ad7b67e4-6146-46fb-a3cd-719e7deeefef" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=kube-state-metrics-0;K8S_POD_INFRA_CONTAINER_ID=7f58e66d3697ff0280ab662d103cdc70e06e55bae660d3b11552edb278e1fd47;K8S_POD_UID=e9030c35-b810-4f59-b1e6-5daec39fcc6d" Path:"" ERRORED: error configuring pod [openstack/kube-state-metrics-0] networking: Multus: [openstack/kube-state-metrics-0/e9030c35-b810-4f59-b1e6-5daec39fcc6d]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod kube-state-metrics-0 in out of cluster comm: SetNetworkStatus: failed to update the pod kube-state-metrics-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/kube-state-metrics-0?timeout=1m0s": dial tcp 38.129.56.91:6443: connect: connection refused Nov 25 10:50:41 crc kubenswrapper[4813]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Nov 25 10:50:41 crc kubenswrapper[4813]: > pod="openstack/kube-state-metrics-0" Nov 25 10:50:41 crc kubenswrapper[4813]: E1125 10:50:41.664850 4813 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Nov 25 10:50:41 crc kubenswrapper[4813]: rpc error: code = Unknown desc = 
failed to create pod network sandbox k8s_kube-state-metrics-0_openstack_e9030c35-b810-4f59-b1e6-5daec39fcc6d_0(7f58e66d3697ff0280ab662d103cdc70e06e55bae660d3b11552edb278e1fd47): error adding pod openstack_kube-state-metrics-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"7f58e66d3697ff0280ab662d103cdc70e06e55bae660d3b11552edb278e1fd47" Netns:"/var/run/netns/ad7b67e4-6146-46fb-a3cd-719e7deeefef" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=kube-state-metrics-0;K8S_POD_INFRA_CONTAINER_ID=7f58e66d3697ff0280ab662d103cdc70e06e55bae660d3b11552edb278e1fd47;K8S_POD_UID=e9030c35-b810-4f59-b1e6-5daec39fcc6d" Path:"" ERRORED: error configuring pod [openstack/kube-state-metrics-0] networking: Multus: [openstack/kube-state-metrics-0/e9030c35-b810-4f59-b1e6-5daec39fcc6d]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod kube-state-metrics-0 in out of cluster comm: SetNetworkStatus: failed to update the pod kube-state-metrics-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/kube-state-metrics-0?timeout=1m0s": dial tcp 38.129.56.91:6443: connect: connection refused Nov 25 10:50:41 crc kubenswrapper[4813]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Nov 25 10:50:41 crc kubenswrapper[4813]: > pod="openstack/kube-state-metrics-0" Nov 25 10:50:41 crc kubenswrapper[4813]: E1125 10:50:41.664904 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-state-metrics-0_openstack(e9030c35-b810-4f59-b1e6-5daec39fcc6d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-state-metrics-0_openstack(e9030c35-b810-4f59-b1e6-5daec39fcc6d)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-state-metrics-0_openstack_e9030c35-b810-4f59-b1e6-5daec39fcc6d_0(7f58e66d3697ff0280ab662d103cdc70e06e55bae660d3b11552edb278e1fd47): error adding pod openstack_kube-state-metrics-0 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"7f58e66d3697ff0280ab662d103cdc70e06e55bae660d3b11552edb278e1fd47\\\" Netns:\\\"/var/run/netns/ad7b67e4-6146-46fb-a3cd-719e7deeefef\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=kube-state-metrics-0;K8S_POD_INFRA_CONTAINER_ID=7f58e66d3697ff0280ab662d103cdc70e06e55bae660d3b11552edb278e1fd47;K8S_POD_UID=e9030c35-b810-4f59-b1e6-5daec39fcc6d\\\" Path:\\\"\\\" ERRORED: error configuring pod [openstack/kube-state-metrics-0] networking: Multus: [openstack/kube-state-metrics-0/e9030c35-b810-4f59-b1e6-5daec39fcc6d]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod kube-state-metrics-0 in out of cluster comm: SetNetworkStatus: failed to update the pod kube-state-metrics-0 in out of cluster comm: status update failed for pod /: Get 
\\\"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/kube-state-metrics-0?timeout=1m0s\\\": dial tcp 38.129.56.91:6443: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openstack/kube-state-metrics-0" podUID="e9030c35-b810-4f59-b1e6-5daec39fcc6d" Nov 25 10:50:41 crc kubenswrapper[4813]: I1125 10:50:41.792714 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-wn82k" Nov 25 10:50:41 crc kubenswrapper[4813]: I1125 10:50:41.793343 4813 status_manager.go:851] "Failed to get status for pod" podUID="62da3927-ddca-4922-8e9b-c96d06c44c31" pod="openstack/dnsmasq-dns-78dd6ddcc-s5ghf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/dnsmasq-dns-78dd6ddcc-s5ghf\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:41 crc kubenswrapper[4813]: I1125 10:50:41.794245 4813 status_manager.go:851] "Failed to get status for pod" podUID="a6eb0ffd-2e55-4d5a-9ac7-19b25ba6ec8b" pod="metallb-system/metallb-operator-controller-manager-6b84b955f5-mmrm7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-6b84b955f5-mmrm7\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:41 crc kubenswrapper[4813]: I1125 10:50:41.794772 4813 status_manager.go:851] "Failed to get status for pod" podUID="2c3ebcfb-71d9-4d57-824a-b6468b15791e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:41 crc kubenswrapper[4813]: I1125 10:50:41.795023 4813 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:41 crc kubenswrapper[4813]: I1125 10:50:41.795255 4813 status_manager.go:851] "Failed to get status for pod" podUID="69f8a703-848a-4de9-a102-81426dcd6c3a" pod="openstack/dnsmasq-dns-675f4bcbfc-wn82k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/dnsmasq-dns-675f4bcbfc-wn82k\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:41 crc kubenswrapper[4813]: I1125 10:50:41.961153 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69f8a703-848a-4de9-a102-81426dcd6c3a-config\") pod \"69f8a703-848a-4de9-a102-81426dcd6c3a\" (UID: \"69f8a703-848a-4de9-a102-81426dcd6c3a\") " Nov 25 10:50:41 crc kubenswrapper[4813]: I1125 10:50:41.961516 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zxcrh\" (UniqueName: \"kubernetes.io/projected/69f8a703-848a-4de9-a102-81426dcd6c3a-kube-api-access-zxcrh\") pod 
\"69f8a703-848a-4de9-a102-81426dcd6c3a\" (UID: \"69f8a703-848a-4de9-a102-81426dcd6c3a\") " Nov 25 10:50:41 crc kubenswrapper[4813]: I1125 10:50:41.961714 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69f8a703-848a-4de9-a102-81426dcd6c3a-config" (OuterVolumeSpecName: "config") pod "69f8a703-848a-4de9-a102-81426dcd6c3a" (UID: "69f8a703-848a-4de9-a102-81426dcd6c3a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:50:41 crc kubenswrapper[4813]: I1125 10:50:41.962000 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69f8a703-848a-4de9-a102-81426dcd6c3a-config\") on node \"crc\" DevicePath \"\"" Nov 25 10:50:41 crc kubenswrapper[4813]: I1125 10:50:41.971253 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69f8a703-848a-4de9-a102-81426dcd6c3a-kube-api-access-zxcrh" (OuterVolumeSpecName: "kube-api-access-zxcrh") pod "69f8a703-848a-4de9-a102-81426dcd6c3a" (UID: "69f8a703-848a-4de9-a102-81426dcd6c3a"). InnerVolumeSpecName "kube-api-access-zxcrh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:50:41 crc kubenswrapper[4813]: E1125 10:50:41.982276 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 25 10:50:41 crc kubenswrapper[4813]: E1125 10:50:41.982460 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q7467,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-hdpgf_openstack(78498723-5c73-4aa4-8480-ef20ce8593ac): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 10:50:41 crc kubenswrapper[4813]: E1125 10:50:41.983654 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-hdpgf" podUID="78498723-5c73-4aa4-8480-ef20ce8593ac" Nov 25 10:50:42 crc kubenswrapper[4813]: I1125 10:50:42.063528 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zxcrh\" (UniqueName: \"kubernetes.io/projected/69f8a703-848a-4de9-a102-81426dcd6c3a-kube-api-access-zxcrh\") on node \"crc\" DevicePath \"\"" Nov 25 10:50:42 crc kubenswrapper[4813]: I1125 10:50:42.080811 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-s5ghf" Nov 25 10:50:42 crc kubenswrapper[4813]: I1125 10:50:42.081458 4813 status_manager.go:851] "Failed to get status for pod" podUID="a6eb0ffd-2e55-4d5a-9ac7-19b25ba6ec8b" pod="metallb-system/metallb-operator-controller-manager-6b84b955f5-mmrm7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-6b84b955f5-mmrm7\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:42 crc kubenswrapper[4813]: I1125 10:50:42.081764 4813 status_manager.go:851] "Failed to get status for pod" podUID="2c3ebcfb-71d9-4d57-824a-b6468b15791e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:42 crc kubenswrapper[4813]: I1125 10:50:42.082000 4813 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:42 crc kubenswrapper[4813]: I1125 10:50:42.082281 4813 status_manager.go:851] "Failed to get status for pod" podUID="69f8a703-848a-4de9-a102-81426dcd6c3a" pod="openstack/dnsmasq-dns-675f4bcbfc-wn82k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/dnsmasq-dns-675f4bcbfc-wn82k\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:42 crc kubenswrapper[4813]: I1125 10:50:42.082523 4813 status_manager.go:851] "Failed to get status for pod" podUID="62da3927-ddca-4922-8e9b-c96d06c44c31" pod="openstack/dnsmasq-dns-78dd6ddcc-s5ghf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/dnsmasq-dns-78dd6ddcc-s5ghf\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:42 crc kubenswrapper[4813]: I1125 10:50:42.165121 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/62da3927-ddca-4922-8e9b-c96d06c44c31-dns-svc\") pod \"62da3927-ddca-4922-8e9b-c96d06c44c31\" (UID: \"62da3927-ddca-4922-8e9b-c96d06c44c31\") " Nov 25 10:50:42 crc kubenswrapper[4813]: I1125 10:50:42.165380 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62da3927-ddca-4922-8e9b-c96d06c44c31-config\") pod \"62da3927-ddca-4922-8e9b-c96d06c44c31\" (UID: \"62da3927-ddca-4922-8e9b-c96d06c44c31\") " Nov 25 10:50:42 crc kubenswrapper[4813]: I1125 10:50:42.165600 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkv86\" (UniqueName: \"kubernetes.io/projected/62da3927-ddca-4922-8e9b-c96d06c44c31-kube-api-access-pkv86\") pod \"62da3927-ddca-4922-8e9b-c96d06c44c31\" (UID: \"62da3927-ddca-4922-8e9b-c96d06c44c31\") " Nov 25 10:50:42 crc kubenswrapper[4813]: I1125 10:50:42.165904 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62da3927-ddca-4922-8e9b-c96d06c44c31-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "62da3927-ddca-4922-8e9b-c96d06c44c31" (UID: "62da3927-ddca-4922-8e9b-c96d06c44c31"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:50:42 crc kubenswrapper[4813]: I1125 10:50:42.166107 4813 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/62da3927-ddca-4922-8e9b-c96d06c44c31-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 10:50:42 crc kubenswrapper[4813]: I1125 10:50:42.166246 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62da3927-ddca-4922-8e9b-c96d06c44c31-config" (OuterVolumeSpecName: "config") pod "62da3927-ddca-4922-8e9b-c96d06c44c31" (UID: "62da3927-ddca-4922-8e9b-c96d06c44c31"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:50:42 crc kubenswrapper[4813]: I1125 10:50:42.172183 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62da3927-ddca-4922-8e9b-c96d06c44c31-kube-api-access-pkv86" (OuterVolumeSpecName: "kube-api-access-pkv86") pod "62da3927-ddca-4922-8e9b-c96d06c44c31" (UID: "62da3927-ddca-4922-8e9b-c96d06c44c31"). InnerVolumeSpecName "kube-api-access-pkv86". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:50:42 crc kubenswrapper[4813]: E1125 10:50:42.173985 4813 log.go:32] "RunPodSandbox from runtime service failed" err=< Nov 25 10:50:42 crc kubenswrapper[4813]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ovn-controller-qjpvf_openstack_da545e4e-8f60-4fb5-93e8-d9e9014c3c74_0(52329c0d8f36030852be880478a53d28f55774491e06e6b9dec2fdb7d930f6ab): error adding pod openstack_ovn-controller-qjpvf to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"52329c0d8f36030852be880478a53d28f55774491e06e6b9dec2fdb7d930f6ab" Netns:"/var/run/netns/d14a005b-7469-43d5-b6d1-e0de7336a179" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=ovn-controller-qjpvf;K8S_POD_INFRA_CONTAINER_ID=52329c0d8f36030852be880478a53d28f55774491e06e6b9dec2fdb7d930f6ab;K8S_POD_UID=da545e4e-8f60-4fb5-93e8-d9e9014c3c74" Path:"" ERRORED: error configuring pod [openstack/ovn-controller-qjpvf] networking: Multus: [openstack/ovn-controller-qjpvf/da545e4e-8f60-4fb5-93e8-d9e9014c3c74]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod ovn-controller-qjpvf in out of cluster comm: SetNetworkStatus: failed to update the pod ovn-controller-qjpvf in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/ovn-controller-qjpvf?timeout=1m0s": dial tcp 38.129.56.91:6443: connect: connection refused Nov 25 10:50:42 crc kubenswrapper[4813]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Nov 25 10:50:42 crc kubenswrapper[4813]: > Nov 25 10:50:42 crc kubenswrapper[4813]: E1125 10:50:42.174044 4813 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Nov 25 10:50:42 crc kubenswrapper[4813]: rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_ovn-controller-qjpvf_openstack_da545e4e-8f60-4fb5-93e8-d9e9014c3c74_0(52329c0d8f36030852be880478a53d28f55774491e06e6b9dec2fdb7d930f6ab): error adding pod openstack_ovn-controller-qjpvf to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"52329c0d8f36030852be880478a53d28f55774491e06e6b9dec2fdb7d930f6ab" Netns:"/var/run/netns/d14a005b-7469-43d5-b6d1-e0de7336a179" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=ovn-controller-qjpvf;K8S_POD_INFRA_CONTAINER_ID=52329c0d8f36030852be880478a53d28f55774491e06e6b9dec2fdb7d930f6ab;K8S_POD_UID=da545e4e-8f60-4fb5-93e8-d9e9014c3c74" Path:"" ERRORED: error configuring pod [openstack/ovn-controller-qjpvf] networking: Multus: [openstack/ovn-controller-qjpvf/da545e4e-8f60-4fb5-93e8-d9e9014c3c74]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod ovn-controller-qjpvf in out of cluster comm: SetNetworkStatus: failed to update the pod ovn-controller-qjpvf in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/ovn-controller-qjpvf?timeout=1m0s": dial tcp 38.129.56.91:6443: connect: connection refused Nov 25 10:50:42 crc kubenswrapper[4813]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Nov 25 10:50:42 crc kubenswrapper[4813]: > pod="openstack/ovn-controller-qjpvf" Nov 25 10:50:42 crc kubenswrapper[4813]: E1125 10:50:42.174064 4813 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Nov 25 10:50:42 crc kubenswrapper[4813]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ovn-controller-qjpvf_openstack_da545e4e-8f60-4fb5-93e8-d9e9014c3c74_0(52329c0d8f36030852be880478a53d28f55774491e06e6b9dec2fdb7d930f6ab): error adding pod openstack_ovn-controller-qjpvf to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"52329c0d8f36030852be880478a53d28f55774491e06e6b9dec2fdb7d930f6ab" Netns:"/var/run/netns/d14a005b-7469-43d5-b6d1-e0de7336a179" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=ovn-controller-qjpvf;K8S_POD_INFRA_CONTAINER_ID=52329c0d8f36030852be880478a53d28f55774491e06e6b9dec2fdb7d930f6ab;K8S_POD_UID=da545e4e-8f60-4fb5-93e8-d9e9014c3c74" Path:"" ERRORED: error configuring pod [openstack/ovn-controller-qjpvf] networking: Multus: [openstack/ovn-controller-qjpvf/da545e4e-8f60-4fb5-93e8-d9e9014c3c74]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod ovn-controller-qjpvf in out of cluster comm: SetNetworkStatus: failed to update the pod ovn-controller-qjpvf in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/ovn-controller-qjpvf?timeout=1m0s": dial tcp 38.129.56.91:6443: connect: connection refused Nov 25 10:50:42 crc kubenswrapper[4813]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Nov 25 10:50:42 crc kubenswrapper[4813]: > pod="openstack/ovn-controller-qjpvf" Nov 25 10:50:42 crc kubenswrapper[4813]: E1125 10:50:42.174122 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ovn-controller-qjpvf_openstack(da545e4e-8f60-4fb5-93e8-d9e9014c3c74)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ovn-controller-qjpvf_openstack(da545e4e-8f60-4fb5-93e8-d9e9014c3c74)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ovn-controller-qjpvf_openstack_da545e4e-8f60-4fb5-93e8-d9e9014c3c74_0(52329c0d8f36030852be880478a53d28f55774491e06e6b9dec2fdb7d930f6ab): error adding pod openstack_ovn-controller-qjpvf to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"52329c0d8f36030852be880478a53d28f55774491e06e6b9dec2fdb7d930f6ab\\\" Netns:\\\"/var/run/netns/d14a005b-7469-43d5-b6d1-e0de7336a179\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=ovn-controller-qjpvf;K8S_POD_INFRA_CONTAINER_ID=52329c0d8f36030852be880478a53d28f55774491e06e6b9dec2fdb7d930f6ab;K8S_POD_UID=da545e4e-8f60-4fb5-93e8-d9e9014c3c74\\\" Path:\\\"\\\" ERRORED: error configuring pod [openstack/ovn-controller-qjpvf] networking: Multus: [openstack/ovn-controller-qjpvf/da545e4e-8f60-4fb5-93e8-d9e9014c3c74]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod ovn-controller-qjpvf in out of cluster comm: SetNetworkStatus: failed to update the pod ovn-controller-qjpvf in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/ovn-controller-qjpvf?timeout=1m0s\\\": dial tcp 38.129.56.91:6443: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openstack/ovn-controller-qjpvf" podUID="da545e4e-8f60-4fb5-93e8-d9e9014c3c74" Nov 25 10:50:42 crc kubenswrapper[4813]: I1125 10:50:42.267918 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pkv86\" (UniqueName: \"kubernetes.io/projected/62da3927-ddca-4922-8e9b-c96d06c44c31-kube-api-access-pkv86\") on node \"crc\" DevicePath \"\"" Nov 25 10:50:42 crc kubenswrapper[4813]: I1125 10:50:42.267967 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62da3927-ddca-4922-8e9b-c96d06c44c31-config\") on node \"crc\" DevicePath \"\"" Nov 25 10:50:42 crc kubenswrapper[4813]: I1125 10:50:42.325657 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1fa4933e3353c6a5dd9ba8d2ba6b8de25f3628327d8f6c94aa922b06ef575b97"} Nov 25 10:50:42 crc kubenswrapper[4813]: I1125 10:50:42.329579 4813 generic.go:334] "Generic (PLEG): container finished" podID="a6eb0ffd-2e55-4d5a-9ac7-19b25ba6ec8b" containerID="f829d6ac06cc4bb482f15385295bf7d72531134c4f45902c0eb550e1d6517fd1" exitCode=1 Nov 25 10:50:42 crc kubenswrapper[4813]: I1125 10:50:42.329635 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6b84b955f5-mmrm7" event={"ID":"a6eb0ffd-2e55-4d5a-9ac7-19b25ba6ec8b","Type":"ContainerDied","Data":"f829d6ac06cc4bb482f15385295bf7d72531134c4f45902c0eb550e1d6517fd1"} Nov 25 10:50:42 crc kubenswrapper[4813]: I1125 10:50:42.329665 4813 scope.go:117] "RemoveContainer" containerID="0bea679701fb92dd51b86000dddec84983c7baac6e6090c8a3567ede6024ce13" Nov 25 10:50:42 crc kubenswrapper[4813]: I1125 10:50:42.330151 4813 scope.go:117] "RemoveContainer" containerID="f829d6ac06cc4bb482f15385295bf7d72531134c4f45902c0eb550e1d6517fd1" Nov 25 10:50:42 crc kubenswrapper[4813]: E1125 10:50:42.330373 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=metallb-operator-controller-manager-6b84b955f5-mmrm7_metallb-system(a6eb0ffd-2e55-4d5a-9ac7-19b25ba6ec8b)\"" pod="metallb-system/metallb-operator-controller-manager-6b84b955f5-mmrm7" podUID="a6eb0ffd-2e55-4d5a-9ac7-19b25ba6ec8b" Nov 25 10:50:42 crc kubenswrapper[4813]: I1125 10:50:42.330500 4813 status_manager.go:851] "Failed to get status for pod" podUID="a6eb0ffd-2e55-4d5a-9ac7-19b25ba6ec8b" pod="metallb-system/metallb-operator-controller-manager-6b84b955f5-mmrm7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-6b84b955f5-mmrm7\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:42 crc kubenswrapper[4813]: I1125 10:50:42.331167 4813 status_manager.go:851] "Failed to get status for pod" podUID="2c3ebcfb-71d9-4d57-824a-b6468b15791e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:42 crc kubenswrapper[4813]: I1125 10:50:42.331613 4813 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:42 crc kubenswrapper[4813]: I1125 10:50:42.331883 4813 status_manager.go:851] "Failed to get status for pod" podUID="69f8a703-848a-4de9-a102-81426dcd6c3a" pod="openstack/dnsmasq-dns-675f4bcbfc-wn82k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/dnsmasq-dns-675f4bcbfc-wn82k\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:42 crc kubenswrapper[4813]: I1125 10:50:42.331961 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-wn82k" Nov 25 10:50:42 crc kubenswrapper[4813]: I1125 10:50:42.331971 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-wn82k" event={"ID":"69f8a703-848a-4de9-a102-81426dcd6c3a","Type":"ContainerDied","Data":"875c1ecba0feb11aac2fe4afeaee39d32b05e1599f00f3a508ab5f7b98b30e41"} Nov 25 10:50:42 crc kubenswrapper[4813]: I1125 10:50:42.332454 4813 status_manager.go:851] "Failed to get status for pod" podUID="62da3927-ddca-4922-8e9b-c96d06c44c31" pod="openstack/dnsmasq-dns-78dd6ddcc-s5ghf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/dnsmasq-dns-78dd6ddcc-s5ghf\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:42 crc kubenswrapper[4813]: I1125 10:50:42.332916 4813 status_manager.go:851] "Failed to get status for pod" podUID="a6eb0ffd-2e55-4d5a-9ac7-19b25ba6ec8b" pod="metallb-system/metallb-operator-controller-manager-6b84b955f5-mmrm7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-6b84b955f5-mmrm7\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:42 crc kubenswrapper[4813]: I1125 10:50:42.333212 4813 status_manager.go:851] "Failed to get status for pod" podUID="2c3ebcfb-71d9-4d57-824a-b6468b15791e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:42 crc kubenswrapper[4813]: I1125 10:50:42.333585 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-qjpvf" Nov 25 10:50:42 crc kubenswrapper[4813]: I1125 10:50:42.333603 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-s5ghf" event={"ID":"62da3927-ddca-4922-8e9b-c96d06c44c31","Type":"ContainerDied","Data":"e5fc344c62bb3f5976f21dd1d9fb10e48119f7fe6e6763599c2c5b6bf1ea4320"} Nov 25 10:50:42 crc kubenswrapper[4813]: I1125 10:50:42.333622 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 25 10:50:42 crc kubenswrapper[4813]: I1125 10:50:42.333622 4813 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:42 crc kubenswrapper[4813]: I1125 10:50:42.333727 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 25 10:50:42 crc kubenswrapper[4813]: I1125 10:50:42.333584 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-s5ghf" Nov 25 10:50:42 crc kubenswrapper[4813]: I1125 10:50:42.333848 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Nov 25 10:50:42 crc kubenswrapper[4813]: I1125 10:50:42.333927 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 25 10:50:42 crc kubenswrapper[4813]: I1125 10:50:42.334129 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-qjpvf" Nov 25 10:50:42 crc kubenswrapper[4813]: I1125 10:50:42.334192 4813 status_manager.go:851] "Failed to get status for pod" podUID="69f8a703-848a-4de9-a102-81426dcd6c3a" pod="openstack/dnsmasq-dns-675f4bcbfc-wn82k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/dnsmasq-dns-675f4bcbfc-wn82k\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:42 crc kubenswrapper[4813]: I1125 10:50:42.334453 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 25 10:50:42 crc kubenswrapper[4813]: I1125 10:50:42.334516 4813 status_manager.go:851] "Failed to get status for pod" podUID="62da3927-ddca-4922-8e9b-c96d06c44c31" pod="openstack/dnsmasq-dns-78dd6ddcc-s5ghf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/dnsmasq-dns-78dd6ddcc-s5ghf\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:42 crc kubenswrapper[4813]: I1125 10:50:42.334562 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 25 10:50:42 crc kubenswrapper[4813]: I1125 10:50:42.334782 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 25 10:50:42 crc kubenswrapper[4813]: I1125 10:50:42.334998 4813 status_manager.go:851] "Failed to get status for pod" podUID="69f8a703-848a-4de9-a102-81426dcd6c3a" pod="openstack/dnsmasq-dns-675f4bcbfc-wn82k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/dnsmasq-dns-675f4bcbfc-wn82k\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:42 crc kubenswrapper[4813]: I1125 10:50:42.335181 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Nov 25 10:50:42 crc kubenswrapper[4813]: I1125 10:50:42.335321 4813 status_manager.go:851] "Failed to get status for pod" podUID="78498723-5c73-4aa4-8480-ef20ce8593ac" pod="openstack/dnsmasq-dns-666b6646f7-hdpgf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/dnsmasq-dns-666b6646f7-hdpgf\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:42 crc kubenswrapper[4813]: I1125 10:50:42.335752 4813 status_manager.go:851] "Failed to get status for pod" podUID="62da3927-ddca-4922-8e9b-c96d06c44c31" pod="openstack/dnsmasq-dns-78dd6ddcc-s5ghf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/dnsmasq-dns-78dd6ddcc-s5ghf\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:42 crc kubenswrapper[4813]: I1125 10:50:42.336050 4813 status_manager.go:851] "Failed to get status for pod" podUID="a6eb0ffd-2e55-4d5a-9ac7-19b25ba6ec8b" pod="metallb-system/metallb-operator-controller-manager-6b84b955f5-mmrm7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-6b84b955f5-mmrm7\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:42 crc kubenswrapper[4813]: I1125 10:50:42.336335 4813 status_manager.go:851] "Failed to get status for pod" podUID="2c3ebcfb-71d9-4d57-824a-b6468b15791e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:42 crc kubenswrapper[4813]: I1125 10:50:42.336721 4813 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:42 crc kubenswrapper[4813]: E1125 10:50:42.343503 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-hdpgf" podUID="78498723-5c73-4aa4-8480-ef20ce8593ac" Nov 25 10:50:42 crc kubenswrapper[4813]: I1125 10:50:42.350110 4813 status_manager.go:851] "Failed to get status for pod" podUID="69f8a703-848a-4de9-a102-81426dcd6c3a" pod="openstack/dnsmasq-dns-675f4bcbfc-wn82k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/dnsmasq-dns-675f4bcbfc-wn82k\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:42 crc kubenswrapper[4813]: I1125 10:50:42.350577 4813 status_manager.go:851] "Failed to get status for pod" podUID="78498723-5c73-4aa4-8480-ef20ce8593ac" pod="openstack/dnsmasq-dns-666b6646f7-hdpgf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/dnsmasq-dns-666b6646f7-hdpgf\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:42 crc kubenswrapper[4813]: I1125 10:50:42.351010 4813 status_manager.go:851] "Failed to get status for pod" podUID="62da3927-ddca-4922-8e9b-c96d06c44c31" pod="openstack/dnsmasq-dns-78dd6ddcc-s5ghf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/dnsmasq-dns-78dd6ddcc-s5ghf\": dial tcp 38.129.56.91:6443: 
connect: connection refused" Nov 25 10:50:42 crc kubenswrapper[4813]: I1125 10:50:42.351332 4813 status_manager.go:851] "Failed to get status for pod" podUID="a6eb0ffd-2e55-4d5a-9ac7-19b25ba6ec8b" pod="metallb-system/metallb-operator-controller-manager-6b84b955f5-mmrm7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-6b84b955f5-mmrm7\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:42 crc kubenswrapper[4813]: I1125 10:50:42.351639 4813 status_manager.go:851] "Failed to get status for pod" podUID="2c3ebcfb-71d9-4d57-824a-b6468b15791e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:42 crc kubenswrapper[4813]: I1125 10:50:42.352119 4813 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:42 crc kubenswrapper[4813]: I1125 10:50:42.360446 4813 status_manager.go:851] "Failed to get status for pod" podUID="62da3927-ddca-4922-8e9b-c96d06c44c31" pod="openstack/dnsmasq-dns-78dd6ddcc-s5ghf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/dnsmasq-dns-78dd6ddcc-s5ghf\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:42 crc kubenswrapper[4813]: I1125 10:50:42.361077 4813 status_manager.go:851] "Failed to get status for pod" podUID="a6eb0ffd-2e55-4d5a-9ac7-19b25ba6ec8b" pod="metallb-system/metallb-operator-controller-manager-6b84b955f5-mmrm7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-6b84b955f5-mmrm7\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:42 crc kubenswrapper[4813]: I1125 10:50:42.361556 4813 status_manager.go:851] "Failed to get status for pod" podUID="2c3ebcfb-71d9-4d57-824a-b6468b15791e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:42 crc kubenswrapper[4813]: I1125 10:50:42.361915 4813 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:42 crc kubenswrapper[4813]: I1125 10:50:42.362275 4813 status_manager.go:851] "Failed to get status for pod" podUID="69f8a703-848a-4de9-a102-81426dcd6c3a" pod="openstack/dnsmasq-dns-675f4bcbfc-wn82k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/dnsmasq-dns-675f4bcbfc-wn82k\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:42 crc kubenswrapper[4813]: I1125 10:50:42.362617 4813 status_manager.go:851] "Failed to get status for pod" podUID="78498723-5c73-4aa4-8480-ef20ce8593ac" pod="openstack/dnsmasq-dns-666b6646f7-hdpgf" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/dnsmasq-dns-666b6646f7-hdpgf\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:42 crc kubenswrapper[4813]: E1125 10:50:42.389228 4813 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openstack/mysql-db-openstack-galera-0: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/persistentvolumeclaims/mysql-db-openstack-galera-0\": dial tcp 38.129.56.91:6443: connect: connection refused" pod="openstack/openstack-galera-0" volumeName="mysql-db" Nov 25 10:50:42 crc kubenswrapper[4813]: E1125 10:50:42.390363 4813 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openstack/mysql-db-openstack-cell1-galera-0: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/persistentvolumeclaims/mysql-db-openstack-cell1-galera-0\": dial tcp 38.129.56.91:6443: connect: connection refused" pod="openstack/openstack-cell1-galera-0" volumeName="mysql-db" Nov 25 10:50:42 crc kubenswrapper[4813]: E1125 10:50:42.666015 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 25 10:50:42 crc kubenswrapper[4813]: E1125 10:50:42.666614 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tjvcc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-6bkfh_openstack(fe34d8fb-5b40-4191-8015-acb5ed8ea562): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 10:50:42 crc kubenswrapper[4813]: E1125 10:50:42.668932 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-6bkfh" podUID="fe34d8fb-5b40-4191-8015-acb5ed8ea562" Nov 25 10:50:42 crc kubenswrapper[4813]: E1125 10:50:42.686630 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:50:42Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:50:42Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:50:42Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T10:50:42Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:42 crc kubenswrapper[4813]: E1125 
10:50:42.687013 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:42 crc kubenswrapper[4813]: E1125 10:50:42.687186 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:42 crc kubenswrapper[4813]: E1125 10:50:42.687364 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:42 crc kubenswrapper[4813]: E1125 10:50:42.687506 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:42 crc kubenswrapper[4813]: E1125 10:50:42.687517 4813 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 10:50:43 crc kubenswrapper[4813]: E1125 10:50:43.238980 4813 log.go:32] "RunPodSandbox from runtime service failed" err=< Nov 25 10:50:43 crc kubenswrapper[4813]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstack-galera-0_openstack_9005be17-9874-4f4f-bd91-39b3c74314ec_0(034b92dc37b0cab4feb2681b666b06cfe30873a87a6ea663dc2e32bf2dd056e8): error adding pod openstack_openstack-galera-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"034b92dc37b0cab4feb2681b666b06cfe30873a87a6ea663dc2e32bf2dd056e8" Netns:"/var/run/netns/7f8c8e63-ca9b-42ca-a314-2e156045abc2" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstack-galera-0;K8S_POD_INFRA_CONTAINER_ID=034b92dc37b0cab4feb2681b666b06cfe30873a87a6ea663dc2e32bf2dd056e8;K8S_POD_UID=9005be17-9874-4f4f-bd91-39b3c74314ec" Path:"" ERRORED: error configuring pod [openstack/openstack-galera-0] networking: Multus: [openstack/openstack-galera-0/9005be17-9874-4f4f-bd91-39b3c74314ec]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod openstack-galera-0 in out of cluster comm: SetNetworkStatus: failed to update the pod openstack-galera-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/openstack-galera-0?timeout=1m0s": dial tcp 38.129.56.91:6443: connect: connection refused Nov 25 10:50:43 crc kubenswrapper[4813]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Nov 25 10:50:43 crc kubenswrapper[4813]: > Nov 25 10:50:43 crc kubenswrapper[4813]: E1125 10:50:43.239376 4813 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Nov 25 10:50:43 crc kubenswrapper[4813]: rpc error: code = Unknown desc = failed to create pod network 
sandbox k8s_openstack-galera-0_openstack_9005be17-9874-4f4f-bd91-39b3c74314ec_0(034b92dc37b0cab4feb2681b666b06cfe30873a87a6ea663dc2e32bf2dd056e8): error adding pod openstack_openstack-galera-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"034b92dc37b0cab4feb2681b666b06cfe30873a87a6ea663dc2e32bf2dd056e8" Netns:"/var/run/netns/7f8c8e63-ca9b-42ca-a314-2e156045abc2" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstack-galera-0;K8S_POD_INFRA_CONTAINER_ID=034b92dc37b0cab4feb2681b666b06cfe30873a87a6ea663dc2e32bf2dd056e8;K8S_POD_UID=9005be17-9874-4f4f-bd91-39b3c74314ec" Path:"" ERRORED: error configuring pod [openstack/openstack-galera-0] networking: Multus: [openstack/openstack-galera-0/9005be17-9874-4f4f-bd91-39b3c74314ec]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod openstack-galera-0 in out of cluster comm: SetNetworkStatus: failed to update the pod openstack-galera-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/openstack-galera-0?timeout=1m0s": dial tcp 38.129.56.91:6443: connect: connection refused Nov 25 10:50:43 crc kubenswrapper[4813]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Nov 25 10:50:43 crc kubenswrapper[4813]: > pod="openstack/openstack-galera-0" Nov 25 10:50:43 crc kubenswrapper[4813]: E1125 10:50:43.239401 4813 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Nov 25 10:50:43 crc kubenswrapper[4813]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstack-galera-0_openstack_9005be17-9874-4f4f-bd91-39b3c74314ec_0(034b92dc37b0cab4feb2681b666b06cfe30873a87a6ea663dc2e32bf2dd056e8): error adding pod openstack_openstack-galera-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"034b92dc37b0cab4feb2681b666b06cfe30873a87a6ea663dc2e32bf2dd056e8" Netns:"/var/run/netns/7f8c8e63-ca9b-42ca-a314-2e156045abc2" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstack-galera-0;K8S_POD_INFRA_CONTAINER_ID=034b92dc37b0cab4feb2681b666b06cfe30873a87a6ea663dc2e32bf2dd056e8;K8S_POD_UID=9005be17-9874-4f4f-bd91-39b3c74314ec" Path:"" ERRORED: error configuring pod [openstack/openstack-galera-0] networking: Multus: [openstack/openstack-galera-0/9005be17-9874-4f4f-bd91-39b3c74314ec]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod openstack-galera-0 in out of cluster comm: SetNetworkStatus: failed to update the pod openstack-galera-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/openstack-galera-0?timeout=1m0s": dial tcp 38.129.56.91:6443: connect: connection refused Nov 25 10:50:43 crc kubenswrapper[4813]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Nov 25 10:50:43 crc kubenswrapper[4813]: > pod="openstack/openstack-galera-0" Nov 25 10:50:43 crc kubenswrapper[4813]: E1125 10:50:43.239472 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"openstack-galera-0_openstack(9005be17-9874-4f4f-bd91-39b3c74314ec)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"openstack-galera-0_openstack(9005be17-9874-4f4f-bd91-39b3c74314ec)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstack-galera-0_openstack_9005be17-9874-4f4f-bd91-39b3c74314ec_0(034b92dc37b0cab4feb2681b666b06cfe30873a87a6ea663dc2e32bf2dd056e8): error adding pod openstack_openstack-galera-0 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"034b92dc37b0cab4feb2681b666b06cfe30873a87a6ea663dc2e32bf2dd056e8\\\" Netns:\\\"/var/run/netns/7f8c8e63-ca9b-42ca-a314-2e156045abc2\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstack-galera-0;K8S_POD_INFRA_CONTAINER_ID=034b92dc37b0cab4feb2681b666b06cfe30873a87a6ea663dc2e32bf2dd056e8;K8S_POD_UID=9005be17-9874-4f4f-bd91-39b3c74314ec\\\" Path:\\\"\\\" ERRORED: error configuring pod [openstack/openstack-galera-0] networking: Multus: [openstack/openstack-galera-0/9005be17-9874-4f4f-bd91-39b3c74314ec]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod openstack-galera-0 in out of cluster comm: SetNetworkStatus: failed to update the pod openstack-galera-0 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/openstack-galera-0?timeout=1m0s\\\": dial tcp 38.129.56.91:6443: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openstack/openstack-galera-0" podUID="9005be17-9874-4f4f-bd91-39b3c74314ec" Nov 25 10:50:43 crc kubenswrapper[4813]: E1125 10:50:43.270164 4813 log.go:32] "RunPodSandbox from runtime service failed" err=< Nov 25 10:50:43 crc kubenswrapper[4813]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ovn-controller-qjpvf_openstack_da545e4e-8f60-4fb5-93e8-d9e9014c3c74_0(dffacfa6430438884bab364fa07e8051811af1317f800fc5cc5d5658c6ff0bf0): error adding pod openstack_ovn-controller-qjpvf to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"dffacfa6430438884bab364fa07e8051811af1317f800fc5cc5d5658c6ff0bf0" Netns:"/var/run/netns/83285b76-9b5c-4eda-9441-8a0ebed4dfd5" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=ovn-controller-qjpvf;K8S_POD_INFRA_CONTAINER_ID=dffacfa6430438884bab364fa07e8051811af1317f800fc5cc5d5658c6ff0bf0;K8S_POD_UID=da545e4e-8f60-4fb5-93e8-d9e9014c3c74" Path:"" ERRORED: error configuring pod [openstack/ovn-controller-qjpvf] networking: Multus: [openstack/ovn-controller-qjpvf/da545e4e-8f60-4fb5-93e8-d9e9014c3c74]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod ovn-controller-qjpvf in out of cluster comm: SetNetworkStatus: failed to update the pod ovn-controller-qjpvf in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/ovn-controller-qjpvf?timeout=1m0s": dial tcp 38.129.56.91:6443: connect: connection refused Nov 25 10:50:43 crc kubenswrapper[4813]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Nov 25 10:50:43 crc kubenswrapper[4813]: > Nov 25 10:50:43 crc kubenswrapper[4813]: E1125 10:50:43.270221 4813 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Nov 25 10:50:43 crc kubenswrapper[4813]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ovn-controller-qjpvf_openstack_da545e4e-8f60-4fb5-93e8-d9e9014c3c74_0(dffacfa6430438884bab364fa07e8051811af1317f800fc5cc5d5658c6ff0bf0): error adding pod openstack_ovn-controller-qjpvf to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"dffacfa6430438884bab364fa07e8051811af1317f800fc5cc5d5658c6ff0bf0" Netns:"/var/run/netns/83285b76-9b5c-4eda-9441-8a0ebed4dfd5" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=ovn-controller-qjpvf;K8S_POD_INFRA_CONTAINER_ID=dffacfa6430438884bab364fa07e8051811af1317f800fc5cc5d5658c6ff0bf0;K8S_POD_UID=da545e4e-8f60-4fb5-93e8-d9e9014c3c74" Path:"" ERRORED: error configuring pod [openstack/ovn-controller-qjpvf] networking: Multus: [openstack/ovn-controller-qjpvf/da545e4e-8f60-4fb5-93e8-d9e9014c3c74]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod ovn-controller-qjpvf in out of cluster comm: SetNetworkStatus: failed to update the pod ovn-controller-qjpvf in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/ovn-controller-qjpvf?timeout=1m0s": dial tcp 38.129.56.91:6443: connect: connection refused Nov 25 10:50:43 crc kubenswrapper[4813]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Nov 25 10:50:43 crc kubenswrapper[4813]: > pod="openstack/ovn-controller-qjpvf" Nov 25 10:50:43 crc kubenswrapper[4813]: E1125 10:50:43.270238 4813 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Nov 25 10:50:43 crc kubenswrapper[4813]: rpc error: code = Unknown desc = failed to create 
pod network sandbox k8s_ovn-controller-qjpvf_openstack_da545e4e-8f60-4fb5-93e8-d9e9014c3c74_0(dffacfa6430438884bab364fa07e8051811af1317f800fc5cc5d5658c6ff0bf0): error adding pod openstack_ovn-controller-qjpvf to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"dffacfa6430438884bab364fa07e8051811af1317f800fc5cc5d5658c6ff0bf0" Netns:"/var/run/netns/83285b76-9b5c-4eda-9441-8a0ebed4dfd5" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=ovn-controller-qjpvf;K8S_POD_INFRA_CONTAINER_ID=dffacfa6430438884bab364fa07e8051811af1317f800fc5cc5d5658c6ff0bf0;K8S_POD_UID=da545e4e-8f60-4fb5-93e8-d9e9014c3c74" Path:"" ERRORED: error configuring pod [openstack/ovn-controller-qjpvf] networking: Multus: [openstack/ovn-controller-qjpvf/da545e4e-8f60-4fb5-93e8-d9e9014c3c74]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod ovn-controller-qjpvf in out of cluster comm: SetNetworkStatus: failed to update the pod ovn-controller-qjpvf in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/ovn-controller-qjpvf?timeout=1m0s": dial tcp 38.129.56.91:6443: connect: connection refused Nov 25 10:50:43 crc kubenswrapper[4813]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Nov 25 10:50:43 crc kubenswrapper[4813]: > pod="openstack/ovn-controller-qjpvf" Nov 25 10:50:43 crc kubenswrapper[4813]: E1125 10:50:43.270317 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ovn-controller-qjpvf_openstack(da545e4e-8f60-4fb5-93e8-d9e9014c3c74)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ovn-controller-qjpvf_openstack(da545e4e-8f60-4fb5-93e8-d9e9014c3c74)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ovn-controller-qjpvf_openstack_da545e4e-8f60-4fb5-93e8-d9e9014c3c74_0(dffacfa6430438884bab364fa07e8051811af1317f800fc5cc5d5658c6ff0bf0): error adding pod openstack_ovn-controller-qjpvf to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"dffacfa6430438884bab364fa07e8051811af1317f800fc5cc5d5658c6ff0bf0\\\" Netns:\\\"/var/run/netns/83285b76-9b5c-4eda-9441-8a0ebed4dfd5\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=ovn-controller-qjpvf;K8S_POD_INFRA_CONTAINER_ID=dffacfa6430438884bab364fa07e8051811af1317f800fc5cc5d5658c6ff0bf0;K8S_POD_UID=da545e4e-8f60-4fb5-93e8-d9e9014c3c74\\\" Path:\\\"\\\" ERRORED: error configuring pod [openstack/ovn-controller-qjpvf] networking: Multus: [openstack/ovn-controller-qjpvf/da545e4e-8f60-4fb5-93e8-d9e9014c3c74]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod ovn-controller-qjpvf in out of cluster comm: SetNetworkStatus: failed to update the pod ovn-controller-qjpvf in out of cluster comm: status update failed for pod /: Get 
\\\"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/ovn-controller-qjpvf?timeout=1m0s\\\": dial tcp 38.129.56.91:6443: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openstack/ovn-controller-qjpvf" podUID="da545e4e-8f60-4fb5-93e8-d9e9014c3c74" Nov 25 10:50:43 crc kubenswrapper[4813]: I1125 10:50:43.353278 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"bf91d2ed-6d43-49b1-8010-1f59f38aea76","Type":"ContainerStarted","Data":"f61649b3a3183e5ea485b01b8d52ba5d8649465776528d41e0e5c9bd61db0694"} Nov 25 10:50:43 crc kubenswrapper[4813]: I1125 10:50:43.355291 4813 status_manager.go:851] "Failed to get status for pod" podUID="62da3927-ddca-4922-8e9b-c96d06c44c31" pod="openstack/dnsmasq-dns-78dd6ddcc-s5ghf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/dnsmasq-dns-78dd6ddcc-s5ghf\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:43 crc kubenswrapper[4813]: I1125 10:50:43.355486 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"776cb73dc4fa34e0ed6b14e607e94731f8cf2badcb1b600b291cab373353d8a8"} Nov 25 10:50:43 crc kubenswrapper[4813]: I1125 10:50:43.355536 4813 status_manager.go:851] "Failed to get status for pod" podUID="bf91d2ed-6d43-49b1-8010-1f59f38aea76" pod="openstack/rabbitmq-cell1-server-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/rabbitmq-cell1-server-0\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:43 crc kubenswrapper[4813]: I1125 10:50:43.355797 4813 status_manager.go:851] "Failed to get status for pod" podUID="a6eb0ffd-2e55-4d5a-9ac7-19b25ba6ec8b" pod="metallb-system/metallb-operator-controller-manager-6b84b955f5-mmrm7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-6b84b955f5-mmrm7\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:43 crc kubenswrapper[4813]: I1125 10:50:43.356034 4813 status_manager.go:851] "Failed to get status for pod" podUID="2c3ebcfb-71d9-4d57-824a-b6468b15791e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:43 crc kubenswrapper[4813]: I1125 10:50:43.356257 4813 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:43 crc kubenswrapper[4813]: E1125 10:50:43.356471 4813 kubelet.go:1929] "Failed creating a mirror pod for" err="Post 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.129.56.91:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 10:50:43 crc kubenswrapper[4813]: I1125 10:50:43.356506 4813 status_manager.go:851] "Failed to get status for pod" podUID="69f8a703-848a-4de9-a102-81426dcd6c3a" pod="openstack/dnsmasq-dns-675f4bcbfc-wn82k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/dnsmasq-dns-675f4bcbfc-wn82k\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:43 crc kubenswrapper[4813]: I1125 10:50:43.356974 4813 status_manager.go:851] "Failed to get status for pod" podUID="78498723-5c73-4aa4-8480-ef20ce8593ac" pod="openstack/dnsmasq-dns-666b6646f7-hdpgf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/dnsmasq-dns-666b6646f7-hdpgf\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:43 crc kubenswrapper[4813]: I1125 10:50:43.357650 4813 status_manager.go:851] "Failed to get status for pod" podUID="62da3927-ddca-4922-8e9b-c96d06c44c31" pod="openstack/dnsmasq-dns-78dd6ddcc-s5ghf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/dnsmasq-dns-78dd6ddcc-s5ghf\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:43 crc kubenswrapper[4813]: I1125 10:50:43.357910 4813 status_manager.go:851] "Failed to get status for pod" podUID="bf91d2ed-6d43-49b1-8010-1f59f38aea76" pod="openstack/rabbitmq-cell1-server-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/rabbitmq-cell1-server-0\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:43 crc kubenswrapper[4813]: I1125 10:50:43.358134 4813 status_manager.go:851] "Failed to get status for pod" podUID="a6eb0ffd-2e55-4d5a-9ac7-19b25ba6ec8b" pod="metallb-system/metallb-operator-controller-manager-6b84b955f5-mmrm7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-6b84b955f5-mmrm7\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:43 crc kubenswrapper[4813]: I1125 10:50:43.358413 4813 status_manager.go:851] "Failed to get status for pod" podUID="2c3ebcfb-71d9-4d57-824a-b6468b15791e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:43 crc kubenswrapper[4813]: I1125 10:50:43.358599 4813 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:43 crc kubenswrapper[4813]: I1125 10:50:43.359108 4813 status_manager.go:851] "Failed to get status for pod" podUID="69f8a703-848a-4de9-a102-81426dcd6c3a" pod="openstack/dnsmasq-dns-675f4bcbfc-wn82k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/dnsmasq-dns-675f4bcbfc-wn82k\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:43 crc kubenswrapper[4813]: I1125 10:50:43.359403 4813 status_manager.go:851] "Failed to get status for pod" podUID="78498723-5c73-4aa4-8480-ef20ce8593ac" pod="openstack/dnsmasq-dns-666b6646f7-hdpgf" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/dnsmasq-dns-666b6646f7-hdpgf\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:43 crc kubenswrapper[4813]: I1125 10:50:43.362718 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Nov 25 10:50:43 crc kubenswrapper[4813]: I1125 10:50:43.363798 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ba75457f3dc04dd107da0f36771c4ddb6a99ad67e0b4e90dc6270fd5e4f11ef8"} Nov 25 10:50:43 crc kubenswrapper[4813]: I1125 10:50:43.364438 4813 status_manager.go:851] "Failed to get status for pod" podUID="62da3927-ddca-4922-8e9b-c96d06c44c31" pod="openstack/dnsmasq-dns-78dd6ddcc-s5ghf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/dnsmasq-dns-78dd6ddcc-s5ghf\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:43 crc kubenswrapper[4813]: I1125 10:50:43.365586 4813 status_manager.go:851] "Failed to get status for pod" podUID="bf91d2ed-6d43-49b1-8010-1f59f38aea76" pod="openstack/rabbitmq-cell1-server-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/rabbitmq-cell1-server-0\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:43 crc kubenswrapper[4813]: I1125 10:50:43.365991 4813 scope.go:117] "RemoveContainer" containerID="f829d6ac06cc4bb482f15385295bf7d72531134c4f45902c0eb550e1d6517fd1" Nov 25 10:50:43 crc kubenswrapper[4813]: E1125 10:50:43.366328 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=metallb-operator-controller-manager-6b84b955f5-mmrm7_metallb-system(a6eb0ffd-2e55-4d5a-9ac7-19b25ba6ec8b)\"" pod="metallb-system/metallb-operator-controller-manager-6b84b955f5-mmrm7" podUID="a6eb0ffd-2e55-4d5a-9ac7-19b25ba6ec8b" Nov 25 10:50:43 crc kubenswrapper[4813]: I1125 10:50:43.366580 4813 status_manager.go:851] "Failed to get status for pod" podUID="a6eb0ffd-2e55-4d5a-9ac7-19b25ba6ec8b" pod="metallb-system/metallb-operator-controller-manager-6b84b955f5-mmrm7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-6b84b955f5-mmrm7\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:43 crc kubenswrapper[4813]: I1125 10:50:43.366815 4813 status_manager.go:851] "Failed to get status for pod" podUID="2c3ebcfb-71d9-4d57-824a-b6468b15791e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:43 crc kubenswrapper[4813]: I1125 10:50:43.374896 4813 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:43 crc kubenswrapper[4813]: I1125 10:50:43.375056 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"aea2efa1-cb45-4657-8ea6-efd7799cb0a4","Type":"ContainerStarted","Data":"0701e0271db215ee376c42bd707d0f093ab2b47d3bb0728f5455ec216732a288"} Nov 25 10:50:43 crc kubenswrapper[4813]: I1125 10:50:43.375228 4813 status_manager.go:851] "Failed to get status for pod" podUID="69f8a703-848a-4de9-a102-81426dcd6c3a" pod="openstack/dnsmasq-dns-675f4bcbfc-wn82k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/dnsmasq-dns-675f4bcbfc-wn82k\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:43 crc kubenswrapper[4813]: I1125 10:50:43.375445 4813 status_manager.go:851] "Failed to get status for pod" podUID="78498723-5c73-4aa4-8480-ef20ce8593ac" pod="openstack/dnsmasq-dns-666b6646f7-hdpgf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/dnsmasq-dns-666b6646f7-hdpgf\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:43 crc kubenswrapper[4813]: I1125 10:50:43.375981 4813 status_manager.go:851] "Failed to get status for pod" podUID="62da3927-ddca-4922-8e9b-c96d06c44c31" pod="openstack/dnsmasq-dns-78dd6ddcc-s5ghf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/dnsmasq-dns-78dd6ddcc-s5ghf\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:43 crc kubenswrapper[4813]: I1125 10:50:43.377104 4813 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="ac5aeee505d0601a846111c886386afb7817936b80e66e0f13908220e9564582" exitCode=0 Nov 25 10:50:43 crc kubenswrapper[4813]: I1125 10:50:43.377166 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"ac5aeee505d0601a846111c886386afb7817936b80e66e0f13908220e9564582"} Nov 25 10:50:43 crc kubenswrapper[4813]: I1125 10:50:43.377249 4813 status_manager.go:851] "Failed to get status for pod" podUID="bf91d2ed-6d43-49b1-8010-1f59f38aea76" pod="openstack/rabbitmq-cell1-server-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/rabbitmq-cell1-server-0\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:43 crc kubenswrapper[4813]: I1125 10:50:43.377488 4813 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="86379c39-b839-4552-949c-35431188a3a7" Nov 25 10:50:43 crc kubenswrapper[4813]: I1125 10:50:43.377528 4813 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="86379c39-b839-4552-949c-35431188a3a7" Nov 25 10:50:43 crc kubenswrapper[4813]: I1125 10:50:43.377569 4813 status_manager.go:851] "Failed to get status for pod" podUID="a6eb0ffd-2e55-4d5a-9ac7-19b25ba6ec8b" pod="metallb-system/metallb-operator-controller-manager-6b84b955f5-mmrm7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-6b84b955f5-mmrm7\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:43 crc kubenswrapper[4813]: E1125 10:50:43.377842 4813 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.91:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 10:50:43 crc kubenswrapper[4813]: I1125 10:50:43.377890 4813 status_manager.go:851] "Failed to get status for pod" 
podUID="2c3ebcfb-71d9-4d57-824a-b6468b15791e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:43 crc kubenswrapper[4813]: I1125 10:50:43.378171 4813 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:43 crc kubenswrapper[4813]: E1125 10:50:43.378225 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-6bkfh" podUID="fe34d8fb-5b40-4191-8015-acb5ed8ea562" Nov 25 10:50:43 crc kubenswrapper[4813]: I1125 10:50:43.378405 4813 status_manager.go:851] "Failed to get status for pod" podUID="69f8a703-848a-4de9-a102-81426dcd6c3a" pod="openstack/dnsmasq-dns-675f4bcbfc-wn82k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/dnsmasq-dns-675f4bcbfc-wn82k\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:43 crc kubenswrapper[4813]: I1125 10:50:43.378617 4813 status_manager.go:851] "Failed to get status for pod" podUID="78498723-5c73-4aa4-8480-ef20ce8593ac" pod="openstack/dnsmasq-dns-666b6646f7-hdpgf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/dnsmasq-dns-666b6646f7-hdpgf\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:43 crc kubenswrapper[4813]: I1125 10:50:43.379028 4813 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:43 crc kubenswrapper[4813]: I1125 10:50:43.379230 4813 status_manager.go:851] "Failed to get status for pod" podUID="69f8a703-848a-4de9-a102-81426dcd6c3a" pod="openstack/dnsmasq-dns-675f4bcbfc-wn82k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/dnsmasq-dns-675f4bcbfc-wn82k\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:43 crc kubenswrapper[4813]: I1125 10:50:43.379491 4813 status_manager.go:851] "Failed to get status for pod" podUID="78498723-5c73-4aa4-8480-ef20ce8593ac" pod="openstack/dnsmasq-dns-666b6646f7-hdpgf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/dnsmasq-dns-666b6646f7-hdpgf\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:43 crc kubenswrapper[4813]: I1125 10:50:43.379761 4813 status_manager.go:851] "Failed to get status for pod" podUID="62da3927-ddca-4922-8e9b-c96d06c44c31" pod="openstack/dnsmasq-dns-78dd6ddcc-s5ghf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/dnsmasq-dns-78dd6ddcc-s5ghf\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:43 crc kubenswrapper[4813]: I1125 10:50:43.380078 4813 status_manager.go:851] "Failed to get status for pod" 
podUID="bf91d2ed-6d43-49b1-8010-1f59f38aea76" pod="openstack/rabbitmq-cell1-server-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/rabbitmq-cell1-server-0\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:43 crc kubenswrapper[4813]: I1125 10:50:43.380936 4813 status_manager.go:851] "Failed to get status for pod" podUID="aea2efa1-cb45-4657-8ea6-efd7799cb0a4" pod="openstack/rabbitmq-server-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/rabbitmq-server-0\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:43 crc kubenswrapper[4813]: I1125 10:50:43.381274 4813 status_manager.go:851] "Failed to get status for pod" podUID="fe34d8fb-5b40-4191-8015-acb5ed8ea562" pod="openstack/dnsmasq-dns-57d769cc4f-6bkfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/dnsmasq-dns-57d769cc4f-6bkfh\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:43 crc kubenswrapper[4813]: I1125 10:50:43.381561 4813 status_manager.go:851] "Failed to get status for pod" podUID="a6eb0ffd-2e55-4d5a-9ac7-19b25ba6ec8b" pod="metallb-system/metallb-operator-controller-manager-6b84b955f5-mmrm7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-6b84b955f5-mmrm7\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:43 crc kubenswrapper[4813]: I1125 10:50:43.381874 4813 status_manager.go:851] "Failed to get status for pod" podUID="2c3ebcfb-71d9-4d57-824a-b6468b15791e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:43 crc kubenswrapper[4813]: E1125 10:50:43.409649 4813 log.go:32] "RunPodSandbox from runtime service failed" err=< Nov 25 10:50:43 crc kubenswrapper[4813]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-state-metrics-0_openstack_e9030c35-b810-4f59-b1e6-5daec39fcc6d_0(405d8e3ccfc1720fde43d8b9e9ce8ecc5933af542d36c576c5a70d2b526750c6): error adding pod openstack_kube-state-metrics-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"405d8e3ccfc1720fde43d8b9e9ce8ecc5933af542d36c576c5a70d2b526750c6" Netns:"/var/run/netns/502841eb-081c-4808-bf7b-f206d217b325" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=kube-state-metrics-0;K8S_POD_INFRA_CONTAINER_ID=405d8e3ccfc1720fde43d8b9e9ce8ecc5933af542d36c576c5a70d2b526750c6;K8S_POD_UID=e9030c35-b810-4f59-b1e6-5daec39fcc6d" Path:"" ERRORED: error configuring pod [openstack/kube-state-metrics-0] networking: Multus: [openstack/kube-state-metrics-0/e9030c35-b810-4f59-b1e6-5daec39fcc6d]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod kube-state-metrics-0 in out of cluster comm: SetNetworkStatus: failed to update the pod kube-state-metrics-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/kube-state-metrics-0?timeout=1m0s": dial tcp 38.129.56.91:6443: connect: connection refused Nov 25 10:50:43 crc kubenswrapper[4813]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Nov 25 10:50:43 crc kubenswrapper[4813]: > Nov 25 10:50:43 crc kubenswrapper[4813]: E1125 10:50:43.409742 4813 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Nov 25 10:50:43 crc kubenswrapper[4813]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-state-metrics-0_openstack_e9030c35-b810-4f59-b1e6-5daec39fcc6d_0(405d8e3ccfc1720fde43d8b9e9ce8ecc5933af542d36c576c5a70d2b526750c6): error adding pod openstack_kube-state-metrics-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"405d8e3ccfc1720fde43d8b9e9ce8ecc5933af542d36c576c5a70d2b526750c6" Netns:"/var/run/netns/502841eb-081c-4808-bf7b-f206d217b325" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=kube-state-metrics-0;K8S_POD_INFRA_CONTAINER_ID=405d8e3ccfc1720fde43d8b9e9ce8ecc5933af542d36c576c5a70d2b526750c6;K8S_POD_UID=e9030c35-b810-4f59-b1e6-5daec39fcc6d" Path:"" ERRORED: error configuring pod [openstack/kube-state-metrics-0] networking: Multus: [openstack/kube-state-metrics-0/e9030c35-b810-4f59-b1e6-5daec39fcc6d]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod kube-state-metrics-0 in out of cluster comm: SetNetworkStatus: failed to update the pod kube-state-metrics-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/kube-state-metrics-0?timeout=1m0s": dial tcp 38.129.56.91:6443: connect: connection refused Nov 25 10:50:43 crc kubenswrapper[4813]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Nov 25 10:50:43 crc kubenswrapper[4813]: > pod="openstack/kube-state-metrics-0" Nov 25 10:50:43 crc kubenswrapper[4813]: E1125 10:50:43.409767 4813 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Nov 25 10:50:43 crc kubenswrapper[4813]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-state-metrics-0_openstack_e9030c35-b810-4f59-b1e6-5daec39fcc6d_0(405d8e3ccfc1720fde43d8b9e9ce8ecc5933af542d36c576c5a70d2b526750c6): error adding pod openstack_kube-state-metrics-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"405d8e3ccfc1720fde43d8b9e9ce8ecc5933af542d36c576c5a70d2b526750c6" Netns:"/var/run/netns/502841eb-081c-4808-bf7b-f206d217b325" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=kube-state-metrics-0;K8S_POD_INFRA_CONTAINER_ID=405d8e3ccfc1720fde43d8b9e9ce8ecc5933af542d36c576c5a70d2b526750c6;K8S_POD_UID=e9030c35-b810-4f59-b1e6-5daec39fcc6d" Path:"" ERRORED: error configuring pod [openstack/kube-state-metrics-0] networking: Multus: 
[openstack/kube-state-metrics-0/e9030c35-b810-4f59-b1e6-5daec39fcc6d]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod kube-state-metrics-0 in out of cluster comm: SetNetworkStatus: failed to update the pod kube-state-metrics-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/kube-state-metrics-0?timeout=1m0s": dial tcp 38.129.56.91:6443: connect: connection refused Nov 25 10:50:43 crc kubenswrapper[4813]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Nov 25 10:50:43 crc kubenswrapper[4813]: > pod="openstack/kube-state-metrics-0" Nov 25 10:50:43 crc kubenswrapper[4813]: E1125 10:50:43.409830 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-state-metrics-0_openstack(e9030c35-b810-4f59-b1e6-5daec39fcc6d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-state-metrics-0_openstack(e9030c35-b810-4f59-b1e6-5daec39fcc6d)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-state-metrics-0_openstack_e9030c35-b810-4f59-b1e6-5daec39fcc6d_0(405d8e3ccfc1720fde43d8b9e9ce8ecc5933af542d36c576c5a70d2b526750c6): error adding pod openstack_kube-state-metrics-0 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"405d8e3ccfc1720fde43d8b9e9ce8ecc5933af542d36c576c5a70d2b526750c6\\\" Netns:\\\"/var/run/netns/502841eb-081c-4808-bf7b-f206d217b325\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=kube-state-metrics-0;K8S_POD_INFRA_CONTAINER_ID=405d8e3ccfc1720fde43d8b9e9ce8ecc5933af542d36c576c5a70d2b526750c6;K8S_POD_UID=e9030c35-b810-4f59-b1e6-5daec39fcc6d\\\" Path:\\\"\\\" ERRORED: error configuring pod [openstack/kube-state-metrics-0] networking: Multus: [openstack/kube-state-metrics-0/e9030c35-b810-4f59-b1e6-5daec39fcc6d]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod kube-state-metrics-0 in out of cluster comm: SetNetworkStatus: failed to update the pod kube-state-metrics-0 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/kube-state-metrics-0?timeout=1m0s\\\": dial tcp 38.129.56.91:6443: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openstack/kube-state-metrics-0" podUID="e9030c35-b810-4f59-b1e6-5daec39fcc6d" Nov 25 10:50:43 crc kubenswrapper[4813]: E1125 10:50:43.410858 4813 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC 
openstack/persistence-rabbitmq-cell1-server-0: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/persistentvolumeclaims/persistence-rabbitmq-cell1-server-0\": dial tcp 38.129.56.91:6443: connect: connection refused" pod="openstack/rabbitmq-cell1-server-0" volumeName="persistence" Nov 25 10:50:43 crc kubenswrapper[4813]: E1125 10:50:43.411302 4813 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openstack/persistence-rabbitmq-server-0: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/persistentvolumeclaims/persistence-rabbitmq-server-0\": dial tcp 38.129.56.91:6443: connect: connection refused" pod="openstack/rabbitmq-server-0" volumeName="persistence" Nov 25 10:50:43 crc kubenswrapper[4813]: E1125 10:50:43.436940 4813 log.go:32] "RunPodSandbox from runtime service failed" err=< Nov 25 10:50:43 crc kubenswrapper[4813]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstack-cell1-galera-0_openstack_0444b7b3-af36-4fca-80c6-8348adc42a58_0(c17c60dd43e73325fcd7aa1b8b29d199964299500ffb69140391318492060073): error adding pod openstack_openstack-cell1-galera-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"c17c60dd43e73325fcd7aa1b8b29d199964299500ffb69140391318492060073" Netns:"/var/run/netns/86aba1cb-f465-4469-bd57-19ac673c7b3b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstack-cell1-galera-0;K8S_POD_INFRA_CONTAINER_ID=c17c60dd43e73325fcd7aa1b8b29d199964299500ffb69140391318492060073;K8S_POD_UID=0444b7b3-af36-4fca-80c6-8348adc42a58" Path:"" ERRORED: error configuring pod [openstack/openstack-cell1-galera-0] networking: Multus: [openstack/openstack-cell1-galera-0/0444b7b3-af36-4fca-80c6-8348adc42a58]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod openstack-cell1-galera-0 in out of cluster comm: SetNetworkStatus: failed to update the pod openstack-cell1-galera-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/openstack-cell1-galera-0?timeout=1m0s": dial tcp 38.129.56.91:6443: connect: connection refused Nov 25 10:50:43 crc kubenswrapper[4813]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Nov 25 10:50:43 crc kubenswrapper[4813]: > Nov 25 10:50:43 crc kubenswrapper[4813]: E1125 10:50:43.437007 4813 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Nov 25 10:50:43 crc kubenswrapper[4813]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstack-cell1-galera-0_openstack_0444b7b3-af36-4fca-80c6-8348adc42a58_0(c17c60dd43e73325fcd7aa1b8b29d199964299500ffb69140391318492060073): error adding pod openstack_openstack-cell1-galera-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"c17c60dd43e73325fcd7aa1b8b29d199964299500ffb69140391318492060073" 
Netns:"/var/run/netns/86aba1cb-f465-4469-bd57-19ac673c7b3b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstack-cell1-galera-0;K8S_POD_INFRA_CONTAINER_ID=c17c60dd43e73325fcd7aa1b8b29d199964299500ffb69140391318492060073;K8S_POD_UID=0444b7b3-af36-4fca-80c6-8348adc42a58" Path:"" ERRORED: error configuring pod [openstack/openstack-cell1-galera-0] networking: Multus: [openstack/openstack-cell1-galera-0/0444b7b3-af36-4fca-80c6-8348adc42a58]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod openstack-cell1-galera-0 in out of cluster comm: SetNetworkStatus: failed to update the pod openstack-cell1-galera-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/openstack-cell1-galera-0?timeout=1m0s": dial tcp 38.129.56.91:6443: connect: connection refused Nov 25 10:50:43 crc kubenswrapper[4813]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Nov 25 10:50:43 crc kubenswrapper[4813]: > pod="openstack/openstack-cell1-galera-0" Nov 25 10:50:43 crc kubenswrapper[4813]: E1125 10:50:43.437026 4813 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Nov 25 10:50:43 crc kubenswrapper[4813]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstack-cell1-galera-0_openstack_0444b7b3-af36-4fca-80c6-8348adc42a58_0(c17c60dd43e73325fcd7aa1b8b29d199964299500ffb69140391318492060073): error adding pod openstack_openstack-cell1-galera-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"c17c60dd43e73325fcd7aa1b8b29d199964299500ffb69140391318492060073" Netns:"/var/run/netns/86aba1cb-f465-4469-bd57-19ac673c7b3b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstack-cell1-galera-0;K8S_POD_INFRA_CONTAINER_ID=c17c60dd43e73325fcd7aa1b8b29d199964299500ffb69140391318492060073;K8S_POD_UID=0444b7b3-af36-4fca-80c6-8348adc42a58" Path:"" ERRORED: error configuring pod [openstack/openstack-cell1-galera-0] networking: Multus: [openstack/openstack-cell1-galera-0/0444b7b3-af36-4fca-80c6-8348adc42a58]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod openstack-cell1-galera-0 in out of cluster comm: SetNetworkStatus: failed to update the pod openstack-cell1-galera-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/openstack-cell1-galera-0?timeout=1m0s": dial tcp 38.129.56.91:6443: connect: connection refused Nov 25 10:50:43 crc kubenswrapper[4813]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Nov 25 10:50:43 crc kubenswrapper[4813]: > pod="openstack/openstack-cell1-galera-0" Nov 25 10:50:43 crc kubenswrapper[4813]: E1125 
10:50:43.437080 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"openstack-cell1-galera-0_openstack(0444b7b3-af36-4fca-80c6-8348adc42a58)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"openstack-cell1-galera-0_openstack(0444b7b3-af36-4fca-80c6-8348adc42a58)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstack-cell1-galera-0_openstack_0444b7b3-af36-4fca-80c6-8348adc42a58_0(c17c60dd43e73325fcd7aa1b8b29d199964299500ffb69140391318492060073): error adding pod openstack_openstack-cell1-galera-0 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"c17c60dd43e73325fcd7aa1b8b29d199964299500ffb69140391318492060073\\\" Netns:\\\"/var/run/netns/86aba1cb-f465-4469-bd57-19ac673c7b3b\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstack-cell1-galera-0;K8S_POD_INFRA_CONTAINER_ID=c17c60dd43e73325fcd7aa1b8b29d199964299500ffb69140391318492060073;K8S_POD_UID=0444b7b3-af36-4fca-80c6-8348adc42a58\\\" Path:\\\"\\\" ERRORED: error configuring pod [openstack/openstack-cell1-galera-0] networking: Multus: [openstack/openstack-cell1-galera-0/0444b7b3-af36-4fca-80c6-8348adc42a58]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod openstack-cell1-galera-0 in out of cluster comm: SetNetworkStatus: failed to update the pod openstack-cell1-galera-0 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/openstack-cell1-galera-0?timeout=1m0s\\\": dial tcp 38.129.56.91:6443: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openstack/openstack-cell1-galera-0" podUID="0444b7b3-af36-4fca-80c6-8348adc42a58" Nov 25 10:50:43 crc kubenswrapper[4813]: E1125 10:50:43.462116 4813 log.go:32] "RunPodSandbox from runtime service failed" err=< Nov 25 10:50:43 crc kubenswrapper[4813]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_memcached-0_openstack_11b88009-8577-4264-afbf-8aee9bfc90f8_0(295c1df0e01938048b842bae6fb2af3e45af6cf5790e094cae799e1d5660eacd): error adding pod openstack_memcached-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"295c1df0e01938048b842bae6fb2af3e45af6cf5790e094cae799e1d5660eacd" Netns:"/var/run/netns/ae30f354-099c-4dc0-9a0b-91fd0bed1f34" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=memcached-0;K8S_POD_INFRA_CONTAINER_ID=295c1df0e01938048b842bae6fb2af3e45af6cf5790e094cae799e1d5660eacd;K8S_POD_UID=11b88009-8577-4264-afbf-8aee9bfc90f8" Path:"" ERRORED: error configuring pod [openstack/memcached-0] networking: Multus: [openstack/memcached-0/11b88009-8577-4264-afbf-8aee9bfc90f8]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod 
memcached-0 in out of cluster comm: SetNetworkStatus: failed to update the pod memcached-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/memcached-0?timeout=1m0s": dial tcp 38.129.56.91:6443: connect: connection refused Nov 25 10:50:43 crc kubenswrapper[4813]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Nov 25 10:50:43 crc kubenswrapper[4813]: > Nov 25 10:50:43 crc kubenswrapper[4813]: E1125 10:50:43.462174 4813 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Nov 25 10:50:43 crc kubenswrapper[4813]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_memcached-0_openstack_11b88009-8577-4264-afbf-8aee9bfc90f8_0(295c1df0e01938048b842bae6fb2af3e45af6cf5790e094cae799e1d5660eacd): error adding pod openstack_memcached-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"295c1df0e01938048b842bae6fb2af3e45af6cf5790e094cae799e1d5660eacd" Netns:"/var/run/netns/ae30f354-099c-4dc0-9a0b-91fd0bed1f34" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=memcached-0;K8S_POD_INFRA_CONTAINER_ID=295c1df0e01938048b842bae6fb2af3e45af6cf5790e094cae799e1d5660eacd;K8S_POD_UID=11b88009-8577-4264-afbf-8aee9bfc90f8" Path:"" ERRORED: error configuring pod [openstack/memcached-0] networking: Multus: [openstack/memcached-0/11b88009-8577-4264-afbf-8aee9bfc90f8]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod memcached-0 in out of cluster comm: SetNetworkStatus: failed to update the pod memcached-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/memcached-0?timeout=1m0s": dial tcp 38.129.56.91:6443: connect: connection refused Nov 25 10:50:43 crc kubenswrapper[4813]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Nov 25 10:50:43 crc kubenswrapper[4813]: > pod="openstack/memcached-0" Nov 25 10:50:43 crc kubenswrapper[4813]: E1125 10:50:43.462192 4813 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Nov 25 10:50:43 crc kubenswrapper[4813]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_memcached-0_openstack_11b88009-8577-4264-afbf-8aee9bfc90f8_0(295c1df0e01938048b842bae6fb2af3e45af6cf5790e094cae799e1d5660eacd): error adding pod openstack_memcached-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"295c1df0e01938048b842bae6fb2af3e45af6cf5790e094cae799e1d5660eacd" Netns:"/var/run/netns/ae30f354-099c-4dc0-9a0b-91fd0bed1f34" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=memcached-0;K8S_POD_INFRA_CONTAINER_ID=295c1df0e01938048b842bae6fb2af3e45af6cf5790e094cae799e1d5660eacd;K8S_POD_UID=11b88009-8577-4264-afbf-8aee9bfc90f8" Path:"" ERRORED: error configuring pod [openstack/memcached-0] networking: Multus: [openstack/memcached-0/11b88009-8577-4264-afbf-8aee9bfc90f8]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod memcached-0 in out of cluster comm: SetNetworkStatus: failed to update the pod memcached-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/memcached-0?timeout=1m0s": dial tcp 38.129.56.91:6443: connect: connection refused Nov 25 10:50:43 crc kubenswrapper[4813]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Nov 25 10:50:43 crc kubenswrapper[4813]: > pod="openstack/memcached-0" Nov 25 10:50:43 crc kubenswrapper[4813]: E1125 10:50:43.462238 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"memcached-0_openstack(11b88009-8577-4264-afbf-8aee9bfc90f8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"memcached-0_openstack(11b88009-8577-4264-afbf-8aee9bfc90f8)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_memcached-0_openstack_11b88009-8577-4264-afbf-8aee9bfc90f8_0(295c1df0e01938048b842bae6fb2af3e45af6cf5790e094cae799e1d5660eacd): error adding pod openstack_memcached-0 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"295c1df0e01938048b842bae6fb2af3e45af6cf5790e094cae799e1d5660eacd\\\" Netns:\\\"/var/run/netns/ae30f354-099c-4dc0-9a0b-91fd0bed1f34\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=memcached-0;K8S_POD_INFRA_CONTAINER_ID=295c1df0e01938048b842bae6fb2af3e45af6cf5790e094cae799e1d5660eacd;K8S_POD_UID=11b88009-8577-4264-afbf-8aee9bfc90f8\\\" Path:\\\"\\\" ERRORED: error configuring pod [openstack/memcached-0] networking: Multus: [openstack/memcached-0/11b88009-8577-4264-afbf-8aee9bfc90f8]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod memcached-0 in out of cluster comm: SetNetworkStatus: failed to update the pod memcached-0 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/memcached-0?timeout=1m0s\\\": dial tcp 38.129.56.91:6443: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openstack/memcached-0" podUID="11b88009-8577-4264-afbf-8aee9bfc90f8" Nov 25 10:50:43 crc 
kubenswrapper[4813]: I1125 10:50:43.629904 4813 status_manager.go:851] "Failed to get status for pod" podUID="78498723-5c73-4aa4-8480-ef20ce8593ac" pod="openstack/dnsmasq-dns-666b6646f7-hdpgf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/dnsmasq-dns-666b6646f7-hdpgf\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:43 crc kubenswrapper[4813]: I1125 10:50:43.630594 4813 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:43 crc kubenswrapper[4813]: I1125 10:50:43.631144 4813 status_manager.go:851] "Failed to get status for pod" podUID="62da3927-ddca-4922-8e9b-c96d06c44c31" pod="openstack/dnsmasq-dns-78dd6ddcc-s5ghf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/dnsmasq-dns-78dd6ddcc-s5ghf\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:43 crc kubenswrapper[4813]: I1125 10:50:43.631484 4813 status_manager.go:851] "Failed to get status for pod" podUID="aea2efa1-cb45-4657-8ea6-efd7799cb0a4" pod="openstack/rabbitmq-server-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/rabbitmq-server-0\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:43 crc kubenswrapper[4813]: I1125 10:50:43.631762 4813 status_manager.go:851] "Failed to get status for pod" podUID="bf91d2ed-6d43-49b1-8010-1f59f38aea76" pod="openstack/rabbitmq-cell1-server-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/rabbitmq-cell1-server-0\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:43 crc kubenswrapper[4813]: I1125 10:50:43.632047 4813 status_manager.go:851] "Failed to get status for pod" podUID="fe34d8fb-5b40-4191-8015-acb5ed8ea562" pod="openstack/dnsmasq-dns-57d769cc4f-6bkfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/dnsmasq-dns-57d769cc4f-6bkfh\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:43 crc kubenswrapper[4813]: I1125 10:50:43.632314 4813 status_manager.go:851] "Failed to get status for pod" podUID="a6eb0ffd-2e55-4d5a-9ac7-19b25ba6ec8b" pod="metallb-system/metallb-operator-controller-manager-6b84b955f5-mmrm7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-6b84b955f5-mmrm7\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:43 crc kubenswrapper[4813]: I1125 10:50:43.632663 4813 status_manager.go:851] "Failed to get status for pod" podUID="2c3ebcfb-71d9-4d57-824a-b6468b15791e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:43 crc kubenswrapper[4813]: I1125 10:50:43.632992 4813 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:43 crc kubenswrapper[4813]: I1125 10:50:43.633209 4813 
status_manager.go:851] "Failed to get status for pod" podUID="69f8a703-848a-4de9-a102-81426dcd6c3a" pod="openstack/dnsmasq-dns-675f4bcbfc-wn82k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/dnsmasq-dns-675f4bcbfc-wn82k\": dial tcp 38.129.56.91:6443: connect: connection refused" Nov 25 10:50:43 crc kubenswrapper[4813]: I1125 10:50:43.815590 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-6b84b955f5-mmrm7" Nov 25 10:50:44 crc kubenswrapper[4813]: I1125 10:50:44.392348 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d4ddc587b463bc9569368db1d5b7bdea4d0ee4d46b9152f92e88f08feeb8fb84"} Nov 25 10:50:44 crc kubenswrapper[4813]: I1125 10:50:44.392689 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c1cb040f6c0787a3f9d9bcfb83ed9d257fe2c4e549d859a6552eb60a4ef3e876"} Nov 25 10:50:44 crc kubenswrapper[4813]: I1125 10:50:44.393754 4813 scope.go:117] "RemoveContainer" containerID="f829d6ac06cc4bb482f15385295bf7d72531134c4f45902c0eb550e1d6517fd1" Nov 25 10:50:44 crc kubenswrapper[4813]: E1125 10:50:44.394193 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=metallb-operator-controller-manager-6b84b955f5-mmrm7_metallb-system(a6eb0ffd-2e55-4d5a-9ac7-19b25ba6ec8b)\"" pod="metallb-system/metallb-operator-controller-manager-6b84b955f5-mmrm7" podUID="a6eb0ffd-2e55-4d5a-9ac7-19b25ba6ec8b" Nov 25 10:50:44 crc kubenswrapper[4813]: I1125 10:50:44.826731 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-c6kw6" podUID="b69526d6-6616-4536-a228-4cdb57e1881c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.82:8081/readyz\": dial tcp 10.217.0.82:8081: connect: connection refused" Nov 25 10:50:44 crc kubenswrapper[4813]: I1125 10:50:44.826752 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-c6kw6" podUID="b69526d6-6616-4536-a228-4cdb57e1881c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.82:8081/healthz\": dial tcp 10.217.0.82:8081: connect: connection refused" Nov 25 10:50:44 crc kubenswrapper[4813]: I1125 10:50:44.834593 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-6j272" podUID="9374bbb0-b458-4c1c-a327-67bcbea83045" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.83:8081/healthz\": dial tcp 10.217.0.83:8081: connect: connection refused" Nov 25 10:50:44 crc kubenswrapper[4813]: I1125 10:50:44.834719 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-6j272" podUID="9374bbb0-b458-4c1c-a327-67bcbea83045" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.83:8081/readyz\": dial tcp 10.217.0.83:8081: connect: connection refused" Nov 25 10:50:45 crc kubenswrapper[4813]: I1125 10:50:45.181712 4813 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-tc2mg" podUID="db556642-a360-4559-8cde-7c25d7a893e0" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.85:8081/healthz\": dial tcp 10.217.0.85:8081: connect: connection refused" Nov 25 10:50:45 crc kubenswrapper[4813]: I1125 10:50:45.181915 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-tc2mg" podUID="db556642-a360-4559-8cde-7c25d7a893e0" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.85:8081/readyz\": dial tcp 10.217.0.85:8081: connect: connection refused" Nov 25 10:50:45 crc kubenswrapper[4813]: I1125 10:50:45.411522 4813 generic.go:334] "Generic (PLEG): container finished" podID="03c63a63-9a46-4bda-941b-8c5ba81a13fe" containerID="e1eb0c6c8ed1a13bd8d9f904f6fa9f54b6e8bffa78cd8521b6ff411c256cf6af" exitCode=1 Nov 25 10:50:45 crc kubenswrapper[4813]: I1125 10:50:45.411875 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-4wff2" event={"ID":"03c63a63-9a46-4bda-941b-8c5ba81a13fe","Type":"ContainerDied","Data":"e1eb0c6c8ed1a13bd8d9f904f6fa9f54b6e8bffa78cd8521b6ff411c256cf6af"} Nov 25 10:50:45 crc kubenswrapper[4813]: I1125 10:50:45.412534 4813 scope.go:117] "RemoveContainer" containerID="e1eb0c6c8ed1a13bd8d9f904f6fa9f54b6e8bffa78cd8521b6ff411c256cf6af" Nov 25 10:50:45 crc kubenswrapper[4813]: I1125 10:50:45.415548 4813 generic.go:334] "Generic (PLEG): container finished" podID="af18e07e-95b3-476f-9604-824c36ae74a5" containerID="842243eec8d9b052ceceececb34b556945beb110af325f4bd64c2f744b4e1647" exitCode=1 Nov 25 10:50:45 crc kubenswrapper[4813]: I1125 10:50:45.415627 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-8spkk" event={"ID":"af18e07e-95b3-476f-9604-824c36ae74a5","Type":"ContainerDied","Data":"842243eec8d9b052ceceececb34b556945beb110af325f4bd64c2f744b4e1647"} Nov 25 10:50:45 crc kubenswrapper[4813]: I1125 10:50:45.416150 4813 scope.go:117] "RemoveContainer" containerID="842243eec8d9b052ceceececb34b556945beb110af325f4bd64c2f744b4e1647" Nov 25 10:50:45 crc kubenswrapper[4813]: I1125 10:50:45.418709 4813 generic.go:334] "Generic (PLEG): container finished" podID="b69526d6-6616-4536-a228-4cdb57e1881c" containerID="7e6532e096a42d57e3dc09ca3de8f7bdad6af978b55fb5a65084a1ddbdfce036" exitCode=1 Nov 25 10:50:45 crc kubenswrapper[4813]: I1125 10:50:45.418781 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-c6kw6" event={"ID":"b69526d6-6616-4536-a228-4cdb57e1881c","Type":"ContainerDied","Data":"7e6532e096a42d57e3dc09ca3de8f7bdad6af978b55fb5a65084a1ddbdfce036"} Nov 25 10:50:45 crc kubenswrapper[4813]: I1125 10:50:45.419437 4813 scope.go:117] "RemoveContainer" containerID="7e6532e096a42d57e3dc09ca3de8f7bdad6af978b55fb5a65084a1ddbdfce036" Nov 25 10:50:45 crc kubenswrapper[4813]: I1125 10:50:45.421586 4813 generic.go:334] "Generic (PLEG): container finished" podID="db556642-a360-4559-8cde-7c25d7a893e0" containerID="c7da2017fb3bb645d069c5a5e65e5ebecf25da108fecf7e3d41efdb7ffbd8944" exitCode=1 Nov 25 10:50:45 crc kubenswrapper[4813]: I1125 10:50:45.421648 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-tc2mg" 
event={"ID":"db556642-a360-4559-8cde-7c25d7a893e0","Type":"ContainerDied","Data":"c7da2017fb3bb645d069c5a5e65e5ebecf25da108fecf7e3d41efdb7ffbd8944"} Nov 25 10:50:45 crc kubenswrapper[4813]: I1125 10:50:45.422218 4813 scope.go:117] "RemoveContainer" containerID="c7da2017fb3bb645d069c5a5e65e5ebecf25da108fecf7e3d41efdb7ffbd8944" Nov 25 10:50:45 crc kubenswrapper[4813]: I1125 10:50:45.438541 4813 generic.go:334] "Generic (PLEG): container finished" podID="d4a62556-e6e8-42dc-b7e4-180c40611393" containerID="cd23090653c4496ed50af88277f58037a85197cd76c6f114b1b622608779a790" exitCode=1 Nov 25 10:50:45 crc kubenswrapper[4813]: I1125 10:50:45.438665 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-blrjt" event={"ID":"d4a62556-e6e8-42dc-b7e4-180c40611393","Type":"ContainerDied","Data":"cd23090653c4496ed50af88277f58037a85197cd76c6f114b1b622608779a790"} Nov 25 10:50:45 crc kubenswrapper[4813]: I1125 10:50:45.439448 4813 scope.go:117] "RemoveContainer" containerID="cd23090653c4496ed50af88277f58037a85197cd76c6f114b1b622608779a790" Nov 25 10:50:45 crc kubenswrapper[4813]: I1125 10:50:45.447985 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ac559f2cea8b4a8d632e935e4a54368c3c6b1d688e9cb5b86fa0276883f98265"} Nov 25 10:50:45 crc kubenswrapper[4813]: I1125 10:50:45.449945 4813 generic.go:334] "Generic (PLEG): container finished" podID="a650bdd3-2541-4b76-b5db-64273262bc06" containerID="9d1c0914bdf672c19650bf0626b573178a26fafce469f87675446e083e25d7f1" exitCode=1 Nov 25 10:50:45 crc kubenswrapper[4813]: I1125 10:50:45.450013 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-dvfd9" event={"ID":"a650bdd3-2541-4b76-b5db-64273262bc06","Type":"ContainerDied","Data":"9d1c0914bdf672c19650bf0626b573178a26fafce469f87675446e083e25d7f1"} Nov 25 10:50:45 crc kubenswrapper[4813]: I1125 10:50:45.450601 4813 scope.go:117] "RemoveContainer" containerID="9d1c0914bdf672c19650bf0626b573178a26fafce469f87675446e083e25d7f1" Nov 25 10:50:45 crc kubenswrapper[4813]: I1125 10:50:45.452152 4813 generic.go:334] "Generic (PLEG): container finished" podID="09bd1800-0aaa-4908-ac58-e0890a2a309f" containerID="489e7457fc7880bb6b4a3038d3eee48ec357875285b69839b02853bb748ea343" exitCode=1 Nov 25 10:50:45 crc kubenswrapper[4813]: I1125 10:50:45.452200 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5ffc8f797b-hbwwd" event={"ID":"09bd1800-0aaa-4908-ac58-e0890a2a309f","Type":"ContainerDied","Data":"489e7457fc7880bb6b4a3038d3eee48ec357875285b69839b02853bb748ea343"} Nov 25 10:50:45 crc kubenswrapper[4813]: I1125 10:50:45.452581 4813 scope.go:117] "RemoveContainer" containerID="489e7457fc7880bb6b4a3038d3eee48ec357875285b69839b02853bb748ea343" Nov 25 10:50:46 crc kubenswrapper[4813]: I1125 10:50:46.501929 4813 generic.go:334] "Generic (PLEG): container finished" podID="9374bbb0-b458-4c1c-a327-67bcbea83045" containerID="e1befc29d4e04337c0bac8394622429b45672d1ff678eecd535a149b5a3d829d" exitCode=1 Nov 25 10:50:46 crc kubenswrapper[4813]: I1125 10:50:46.501974 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-6j272" 
event={"ID":"9374bbb0-b458-4c1c-a327-67bcbea83045","Type":"ContainerDied","Data":"e1befc29d4e04337c0bac8394622429b45672d1ff678eecd535a149b5a3d829d"} Nov 25 10:50:46 crc kubenswrapper[4813]: I1125 10:50:46.502889 4813 scope.go:117] "RemoveContainer" containerID="e1befc29d4e04337c0bac8394622429b45672d1ff678eecd535a149b5a3d829d" Nov 25 10:50:47 crc kubenswrapper[4813]: I1125 10:50:47.212135 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-operator-577fbd7764-z9m8h" podUID="32603f59-2392-4c3e-9d25-ba1fe7376687" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.70:8081/readyz\": dial tcp 10.217.0.70:8081: connect: connection refused" Nov 25 10:50:47 crc kubenswrapper[4813]: I1125 10:50:47.213581 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-operator-577fbd7764-z9m8h" podUID="32603f59-2392-4c3e-9d25-ba1fe7376687" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.70:8081/healthz\": dial tcp 10.217.0.70:8081: connect: connection refused" Nov 25 10:50:47 crc kubenswrapper[4813]: I1125 10:50:47.513917 4813 generic.go:334] "Generic (PLEG): container finished" podID="b69526d6-6616-4536-a228-4cdb57e1881c" containerID="41b0500f8fda41e041f07f12a281727c6c4654deebb714ace797dfcdc6453b60" exitCode=1 Nov 25 10:50:47 crc kubenswrapper[4813]: I1125 10:50:47.514005 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-c6kw6" event={"ID":"b69526d6-6616-4536-a228-4cdb57e1881c","Type":"ContainerDied","Data":"41b0500f8fda41e041f07f12a281727c6c4654deebb714ace797dfcdc6453b60"} Nov 25 10:50:47 crc kubenswrapper[4813]: I1125 10:50:47.514071 4813 scope.go:117] "RemoveContainer" containerID="7e6532e096a42d57e3dc09ca3de8f7bdad6af978b55fb5a65084a1ddbdfce036" Nov 25 10:50:47 crc kubenswrapper[4813]: I1125 10:50:47.515097 4813 scope.go:117] "RemoveContainer" containerID="41b0500f8fda41e041f07f12a281727c6c4654deebb714ace797dfcdc6453b60" Nov 25 10:50:47 crc kubenswrapper[4813]: E1125 10:50:47.515367 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=neutron-operator-controller-manager-7c57c8bbc4-c6kw6_openstack-operators(b69526d6-6616-4536-a228-4cdb57e1881c)\"" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-c6kw6" podUID="b69526d6-6616-4536-a228-4cdb57e1881c" Nov 25 10:50:47 crc kubenswrapper[4813]: I1125 10:50:47.522842 4813 generic.go:334] "Generic (PLEG): container finished" podID="db556642-a360-4559-8cde-7c25d7a893e0" containerID="71f03dbe720f3dde3b5d0765c2080a87ad643d916dde889b7465a7f5184107f5" exitCode=1 Nov 25 10:50:47 crc kubenswrapper[4813]: I1125 10:50:47.522936 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-tc2mg" event={"ID":"db556642-a360-4559-8cde-7c25d7a893e0","Type":"ContainerDied","Data":"71f03dbe720f3dde3b5d0765c2080a87ad643d916dde889b7465a7f5184107f5"} Nov 25 10:50:47 crc kubenswrapper[4813]: I1125 10:50:47.523943 4813 scope.go:117] "RemoveContainer" containerID="71f03dbe720f3dde3b5d0765c2080a87ad643d916dde889b7465a7f5184107f5" Nov 25 10:50:47 crc kubenswrapper[4813]: E1125 10:50:47.524236 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 
10s restarting failed container=manager pod=ovn-operator-controller-manager-66cf5c67ff-tc2mg_openstack-operators(db556642-a360-4559-8cde-7c25d7a893e0)\"" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-tc2mg" podUID="db556642-a360-4559-8cde-7c25d7a893e0" Nov 25 10:50:47 crc kubenswrapper[4813]: I1125 10:50:47.526147 4813 generic.go:334] "Generic (PLEG): container finished" podID="d4a62556-e6e8-42dc-b7e4-180c40611393" containerID="40fdb2fd01b91262d81b9fa748bb8d4c5cc505a7dfc16986f11d66d70563bf46" exitCode=1 Nov 25 10:50:47 crc kubenswrapper[4813]: I1125 10:50:47.526204 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-blrjt" event={"ID":"d4a62556-e6e8-42dc-b7e4-180c40611393","Type":"ContainerDied","Data":"40fdb2fd01b91262d81b9fa748bb8d4c5cc505a7dfc16986f11d66d70563bf46"} Nov 25 10:50:47 crc kubenswrapper[4813]: I1125 10:50:47.526823 4813 scope.go:117] "RemoveContainer" containerID="40fdb2fd01b91262d81b9fa748bb8d4c5cc505a7dfc16986f11d66d70563bf46" Nov 25 10:50:47 crc kubenswrapper[4813]: E1125 10:50:47.527050 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=ironic-operator-controller-manager-5bfcdc958c-blrjt_openstack-operators(d4a62556-e6e8-42dc-b7e4-180c40611393)\"" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-blrjt" podUID="d4a62556-e6e8-42dc-b7e4-180c40611393" Nov 25 10:50:47 crc kubenswrapper[4813]: I1125 10:50:47.562675 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5be1239ab56885b98c557003c5ce1aeb5899857c91985f97f56ba1c6ea1d67b8"} Nov 25 10:50:47 crc kubenswrapper[4813]: I1125 10:50:47.562731 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"0b9c23a560e8e7170fd2afb21c0ddee885618e878f4cab653b23434304153356"} Nov 25 10:50:47 crc kubenswrapper[4813]: I1125 10:50:47.562982 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 10:50:47 crc kubenswrapper[4813]: I1125 10:50:47.563035 4813 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="86379c39-b839-4552-949c-35431188a3a7" Nov 25 10:50:47 crc kubenswrapper[4813]: I1125 10:50:47.563052 4813 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="86379c39-b839-4552-949c-35431188a3a7" Nov 25 10:50:47 crc kubenswrapper[4813]: I1125 10:50:47.577828 4813 generic.go:334] "Generic (PLEG): container finished" podID="09bd1800-0aaa-4908-ac58-e0890a2a309f" containerID="2ff22368afff2caaf623f9590ad4d44ff7f5d8f168fc61d3db437d96e48f8683" exitCode=1 Nov 25 10:50:47 crc kubenswrapper[4813]: I1125 10:50:47.577907 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5ffc8f797b-hbwwd" event={"ID":"09bd1800-0aaa-4908-ac58-e0890a2a309f","Type":"ContainerDied","Data":"2ff22368afff2caaf623f9590ad4d44ff7f5d8f168fc61d3db437d96e48f8683"} Nov 25 10:50:47 crc kubenswrapper[4813]: I1125 10:50:47.578558 4813 scope.go:117] "RemoveContainer" containerID="2ff22368afff2caaf623f9590ad4d44ff7f5d8f168fc61d3db437d96e48f8683" Nov 
25 10:50:47 crc kubenswrapper[4813]: E1125 10:50:47.578793 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=openstack-operator-controller-manager-5ffc8f797b-hbwwd_openstack-operators(09bd1800-0aaa-4908-ac58-e0890a2a309f)\"" pod="openstack-operators/openstack-operator-controller-manager-5ffc8f797b-hbwwd" podUID="09bd1800-0aaa-4908-ac58-e0890a2a309f" Nov 25 10:50:47 crc kubenswrapper[4813]: I1125 10:50:47.601011 4813 generic.go:334] "Generic (PLEG): container finished" podID="48ea1018-a88f-4ef0-a82f-7e3b012522ec" containerID="14eb11c01b6d36ebc30b5c1849b014d01064dc504a5e87489bc440df8d2dba84" exitCode=1 Nov 25 10:50:47 crc kubenswrapper[4813]: I1125 10:50:47.601074 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-864885998-bpbjt" event={"ID":"48ea1018-a88f-4ef0-a82f-7e3b012522ec","Type":"ContainerDied","Data":"14eb11c01b6d36ebc30b5c1849b014d01064dc504a5e87489bc440df8d2dba84"} Nov 25 10:50:47 crc kubenswrapper[4813]: I1125 10:50:47.602305 4813 scope.go:117] "RemoveContainer" containerID="14eb11c01b6d36ebc30b5c1849b014d01064dc504a5e87489bc440df8d2dba84" Nov 25 10:50:47 crc kubenswrapper[4813]: I1125 10:50:47.614336 4813 scope.go:117] "RemoveContainer" containerID="c7da2017fb3bb645d069c5a5e65e5ebecf25da108fecf7e3d41efdb7ffbd8944" Nov 25 10:50:47 crc kubenswrapper[4813]: I1125 10:50:47.639013 4813 generic.go:334] "Generic (PLEG): container finished" podID="03c63a63-9a46-4bda-941b-8c5ba81a13fe" containerID="cda74bc0da0a48be166875072753f56094069000bba93d31425a2aa47be4245b" exitCode=1 Nov 25 10:50:47 crc kubenswrapper[4813]: I1125 10:50:47.652418 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-4wff2" event={"ID":"03c63a63-9a46-4bda-941b-8c5ba81a13fe","Type":"ContainerDied","Data":"cda74bc0da0a48be166875072753f56094069000bba93d31425a2aa47be4245b"} Nov 25 10:50:47 crc kubenswrapper[4813]: I1125 10:50:47.653174 4813 scope.go:117] "RemoveContainer" containerID="cda74bc0da0a48be166875072753f56094069000bba93d31425a2aa47be4245b" Nov 25 10:50:47 crc kubenswrapper[4813]: E1125 10:50:47.653395 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=barbican-operator-controller-manager-86dc4d89c8-4wff2_openstack-operators(03c63a63-9a46-4bda-941b-8c5ba81a13fe)\"" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-4wff2" podUID="03c63a63-9a46-4bda-941b-8c5ba81a13fe" Nov 25 10:50:47 crc kubenswrapper[4813]: I1125 10:50:47.663557 4813 generic.go:334] "Generic (PLEG): container finished" podID="9374bbb0-b458-4c1c-a327-67bcbea83045" containerID="b336d93ef0071b8168144d46bcb0c44981c4be7dc6628753c0c834d82d1cb9a9" exitCode=1 Nov 25 10:50:47 crc kubenswrapper[4813]: I1125 10:50:47.663657 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-6j272" event={"ID":"9374bbb0-b458-4c1c-a327-67bcbea83045","Type":"ContainerDied","Data":"b336d93ef0071b8168144d46bcb0c44981c4be7dc6628753c0c834d82d1cb9a9"} Nov 25 10:50:47 crc kubenswrapper[4813]: I1125 10:50:47.664282 4813 scope.go:117] "RemoveContainer" containerID="b336d93ef0071b8168144d46bcb0c44981c4be7dc6628753c0c834d82d1cb9a9" Nov 25 10:50:47 crc kubenswrapper[4813]: E1125 
10:50:47.664492 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=nova-operator-controller-manager-79556f57fc-6j272_openstack-operators(9374bbb0-b458-4c1c-a327-67bcbea83045)\"" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-6j272" podUID="9374bbb0-b458-4c1c-a327-67bcbea83045" Nov 25 10:50:47 crc kubenswrapper[4813]: I1125 10:50:47.666112 4813 generic.go:334] "Generic (PLEG): container finished" podID="5f9254c7-c8dc-4504-bdf5-264c78e03b0c" containerID="e5cae6caa5898754bc6fa96cd83b7f4f38ebb445710e2457a04bde21b6f3350e" exitCode=1 Nov 25 10:50:47 crc kubenswrapper[4813]: I1125 10:50:47.666158 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-qplf9" event={"ID":"5f9254c7-c8dc-4504-bdf5-264c78e03b0c","Type":"ContainerDied","Data":"e5cae6caa5898754bc6fa96cd83b7f4f38ebb445710e2457a04bde21b6f3350e"} Nov 25 10:50:47 crc kubenswrapper[4813]: I1125 10:50:47.666472 4813 scope.go:117] "RemoveContainer" containerID="e5cae6caa5898754bc6fa96cd83b7f4f38ebb445710e2457a04bde21b6f3350e" Nov 25 10:50:47 crc kubenswrapper[4813]: I1125 10:50:47.668841 4813 generic.go:334] "Generic (PLEG): container finished" podID="a650bdd3-2541-4b76-b5db-64273262bc06" containerID="1140fc40e47e8c0d7c57a6561b72f2fbaac44f04f7b4ed24f1f25b537b03c0ce" exitCode=1 Nov 25 10:50:47 crc kubenswrapper[4813]: I1125 10:50:47.668908 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-dvfd9" event={"ID":"a650bdd3-2541-4b76-b5db-64273262bc06","Type":"ContainerDied","Data":"1140fc40e47e8c0d7c57a6561b72f2fbaac44f04f7b4ed24f1f25b537b03c0ce"} Nov 25 10:50:47 crc kubenswrapper[4813]: I1125 10:50:47.669225 4813 scope.go:117] "RemoveContainer" containerID="1140fc40e47e8c0d7c57a6561b72f2fbaac44f04f7b4ed24f1f25b537b03c0ce" Nov 25 10:50:47 crc kubenswrapper[4813]: E1125 10:50:47.669407 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=cinder-operator-controller-manager-79856dc55c-dvfd9_openstack-operators(a650bdd3-2541-4b76-b5db-64273262bc06)\"" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-dvfd9" podUID="a650bdd3-2541-4b76-b5db-64273262bc06" Nov 25 10:50:47 crc kubenswrapper[4813]: I1125 10:50:47.670225 4813 generic.go:334] "Generic (PLEG): container finished" podID="baf6f7bb-db50-4013-8b77-2b7e4c8101c2" containerID="76db09afdbd7878d0e725a85cae6ef51ea46a3e2f3750023c641aa307705dc45" exitCode=1 Nov 25 10:50:47 crc kubenswrapper[4813]: I1125 10:50:47.670276 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-5ldjd" event={"ID":"baf6f7bb-db50-4013-8b77-2b7e4c8101c2","Type":"ContainerDied","Data":"76db09afdbd7878d0e725a85cae6ef51ea46a3e2f3750023c641aa307705dc45"} Nov 25 10:50:47 crc kubenswrapper[4813]: I1125 10:50:47.670564 4813 scope.go:117] "RemoveContainer" containerID="76db09afdbd7878d0e725a85cae6ef51ea46a3e2f3750023c641aa307705dc45" Nov 25 10:50:47 crc kubenswrapper[4813]: I1125 10:50:47.685206 4813 scope.go:117] "RemoveContainer" containerID="cd23090653c4496ed50af88277f58037a85197cd76c6f114b1b622608779a790" Nov 25 10:50:47 crc kubenswrapper[4813]: I1125 10:50:47.685351 4813 generic.go:334] "Generic (PLEG): container 
finished" podID="af18e07e-95b3-476f-9604-824c36ae74a5" containerID="aa8bdca827af36c5e7fb4dc86eaf3de61567a99aac972137a16c2ce4565816ff" exitCode=1 Nov 25 10:50:47 crc kubenswrapper[4813]: I1125 10:50:47.685456 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-8spkk" event={"ID":"af18e07e-95b3-476f-9604-824c36ae74a5","Type":"ContainerDied","Data":"aa8bdca827af36c5e7fb4dc86eaf3de61567a99aac972137a16c2ce4565816ff"} Nov 25 10:50:47 crc kubenswrapper[4813]: I1125 10:50:47.686184 4813 scope.go:117] "RemoveContainer" containerID="aa8bdca827af36c5e7fb4dc86eaf3de61567a99aac972137a16c2ce4565816ff" Nov 25 10:50:47 crc kubenswrapper[4813]: E1125 10:50:47.686482 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=horizon-operator-controller-manager-68c9694994-8spkk_openstack-operators(af18e07e-95b3-476f-9604-824c36ae74a5)\"" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-8spkk" podUID="af18e07e-95b3-476f-9604-824c36ae74a5" Nov 25 10:50:47 crc kubenswrapper[4813]: I1125 10:50:47.692823 4813 generic.go:334] "Generic (PLEG): container finished" podID="32603f59-2392-4c3e-9d25-ba1fe7376687" containerID="68449d85180117c6a9f528c03ef3e4490850e386f64be4f9984c846ecba8bb0e" exitCode=1 Nov 25 10:50:47 crc kubenswrapper[4813]: I1125 10:50:47.692872 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-577fbd7764-z9m8h" event={"ID":"32603f59-2392-4c3e-9d25-ba1fe7376687","Type":"ContainerDied","Data":"68449d85180117c6a9f528c03ef3e4490850e386f64be4f9984c846ecba8bb0e"} Nov 25 10:50:47 crc kubenswrapper[4813]: I1125 10:50:47.693369 4813 scope.go:117] "RemoveContainer" containerID="68449d85180117c6a9f528c03ef3e4490850e386f64be4f9984c846ecba8bb0e" Nov 25 10:50:47 crc kubenswrapper[4813]: I1125 10:50:47.725051 4813 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 10:50:47 crc kubenswrapper[4813]: I1125 10:50:47.808985 4813 scope.go:117] "RemoveContainer" containerID="489e7457fc7880bb6b4a3038d3eee48ec357875285b69839b02853bb748ea343" Nov 25 10:50:47 crc kubenswrapper[4813]: I1125 10:50:47.879808 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 10:50:47 crc kubenswrapper[4813]: I1125 10:50:47.900049 4813 scope.go:117] "RemoveContainer" containerID="e1eb0c6c8ed1a13bd8d9f904f6fa9f54b6e8bffa78cd8521b6ff411c256cf6af" Nov 25 10:50:47 crc kubenswrapper[4813]: I1125 10:50:47.978584 4813 scope.go:117] "RemoveContainer" containerID="e1befc29d4e04337c0bac8394622429b45672d1ff678eecd535a149b5a3d829d" Nov 25 10:50:48 crc kubenswrapper[4813]: I1125 10:50:48.059222 4813 scope.go:117] "RemoveContainer" containerID="9d1c0914bdf672c19650bf0626b573178a26fafce469f87675446e083e25d7f1" Nov 25 10:50:48 crc kubenswrapper[4813]: I1125 10:50:48.096377 4813 scope.go:117] "RemoveContainer" containerID="842243eec8d9b052ceceececb34b556945beb110af325f4bd64c2f744b4e1647" Nov 25 10:50:48 crc kubenswrapper[4813]: I1125 10:50:48.711653 4813 generic.go:334] "Generic (PLEG): container finished" podID="71c5bfc5-a289-4942-bc55-819f06787eb6" containerID="97c3da655b8a78107c3d2c8c2e2c1a9f338de1a59c124f53ea5084c05a530049" exitCode=1 Nov 25 10:50:48 crc kubenswrapper[4813]: I1125 10:50:48.711742 4813 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-547cf68667-6v6dd" event={"ID":"71c5bfc5-a289-4942-bc55-819f06787eb6","Type":"ContainerDied","Data":"97c3da655b8a78107c3d2c8c2e2c1a9f338de1a59c124f53ea5084c05a530049"} Nov 25 10:50:48 crc kubenswrapper[4813]: I1125 10:50:48.712298 4813 scope.go:117] "RemoveContainer" containerID="97c3da655b8a78107c3d2c8c2e2c1a9f338de1a59c124f53ea5084c05a530049" Nov 25 10:50:48 crc kubenswrapper[4813]: I1125 10:50:48.714955 4813 generic.go:334] "Generic (PLEG): container finished" podID="5f9254c7-c8dc-4504-bdf5-264c78e03b0c" containerID="08c55215eff81a95904e99154d36ad7ea72cad77ec8e4aa5f314e90700fedc4a" exitCode=1 Nov 25 10:50:48 crc kubenswrapper[4813]: I1125 10:50:48.715012 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-qplf9" event={"ID":"5f9254c7-c8dc-4504-bdf5-264c78e03b0c","Type":"ContainerDied","Data":"08c55215eff81a95904e99154d36ad7ea72cad77ec8e4aa5f314e90700fedc4a"} Nov 25 10:50:48 crc kubenswrapper[4813]: I1125 10:50:48.715058 4813 scope.go:117] "RemoveContainer" containerID="e5cae6caa5898754bc6fa96cd83b7f4f38ebb445710e2457a04bde21b6f3350e" Nov 25 10:50:48 crc kubenswrapper[4813]: I1125 10:50:48.715422 4813 scope.go:117] "RemoveContainer" containerID="08c55215eff81a95904e99154d36ad7ea72cad77ec8e4aa5f314e90700fedc4a" Nov 25 10:50:48 crc kubenswrapper[4813]: E1125 10:50:48.715635 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=telemetry-operator-controller-manager-567f98c9d-qplf9_openstack-operators(5f9254c7-c8dc-4504-bdf5-264c78e03b0c)\"" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-qplf9" podUID="5f9254c7-c8dc-4504-bdf5-264c78e03b0c" Nov 25 10:50:48 crc kubenswrapper[4813]: I1125 10:50:48.717082 4813 generic.go:334] "Generic (PLEG): container finished" podID="0a946ff2-f2e3-48c2-ae3b-774a4ea85492" containerID="e3bbdce0fdd885f2702d09e3ce60b294f195cf38c6f9d83a1dc5f311ba72589a" exitCode=1 Nov 25 10:50:48 crc kubenswrapper[4813]: I1125 10:50:48.717123 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-v2clw" event={"ID":"0a946ff2-f2e3-48c2-ae3b-774a4ea85492","Type":"ContainerDied","Data":"e3bbdce0fdd885f2702d09e3ce60b294f195cf38c6f9d83a1dc5f311ba72589a"} Nov 25 10:50:48 crc kubenswrapper[4813]: I1125 10:50:48.717647 4813 scope.go:117] "RemoveContainer" containerID="e3bbdce0fdd885f2702d09e3ce60b294f195cf38c6f9d83a1dc5f311ba72589a" Nov 25 10:50:48 crc kubenswrapper[4813]: I1125 10:50:48.731281 4813 scope.go:117] "RemoveContainer" containerID="40fdb2fd01b91262d81b9fa748bb8d4c5cc505a7dfc16986f11d66d70563bf46" Nov 25 10:50:48 crc kubenswrapper[4813]: E1125 10:50:48.731514 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=ironic-operator-controller-manager-5bfcdc958c-blrjt_openstack-operators(d4a62556-e6e8-42dc-b7e4-180c40611393)\"" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-blrjt" podUID="d4a62556-e6e8-42dc-b7e4-180c40611393" Nov 25 10:50:48 crc kubenswrapper[4813]: I1125 10:50:48.737010 4813 generic.go:334] "Generic (PLEG): container finished" podID="eaf6f1c0-6585-4eba-8baf-942ed2503735" 
containerID="2d86425d39a76afbae7bbc79c5701956f6ea1837959e4367e78b7b49ded3ad6c" exitCode=1 Nov 25 10:50:48 crc kubenswrapper[4813]: I1125 10:50:48.737060 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-774b86978c-f6dvp" event={"ID":"eaf6f1c0-6585-4eba-8baf-942ed2503735","Type":"ContainerDied","Data":"2d86425d39a76afbae7bbc79c5701956f6ea1837959e4367e78b7b49ded3ad6c"} Nov 25 10:50:48 crc kubenswrapper[4813]: I1125 10:50:48.737378 4813 scope.go:117] "RemoveContainer" containerID="2d86425d39a76afbae7bbc79c5701956f6ea1837959e4367e78b7b49ded3ad6c" Nov 25 10:50:48 crc kubenswrapper[4813]: I1125 10:50:48.744082 4813 generic.go:334] "Generic (PLEG): container finished" podID="baf6f7bb-db50-4013-8b77-2b7e4c8101c2" containerID="d09637388df878cdf2bfa3d5e5d83aa661d35fdf74a36a7300cd4e0118bf3141" exitCode=1 Nov 25 10:50:48 crc kubenswrapper[4813]: I1125 10:50:48.744126 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-5ldjd" event={"ID":"baf6f7bb-db50-4013-8b77-2b7e4c8101c2","Type":"ContainerDied","Data":"d09637388df878cdf2bfa3d5e5d83aa661d35fdf74a36a7300cd4e0118bf3141"} Nov 25 10:50:48 crc kubenswrapper[4813]: I1125 10:50:48.744516 4813 scope.go:117] "RemoveContainer" containerID="d09637388df878cdf2bfa3d5e5d83aa661d35fdf74a36a7300cd4e0118bf3141" Nov 25 10:50:48 crc kubenswrapper[4813]: E1125 10:50:48.744707 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=mariadb-operator-controller-manager-cb6c4fdb7-5ldjd_openstack-operators(baf6f7bb-db50-4013-8b77-2b7e4c8101c2)\"" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-5ldjd" podUID="baf6f7bb-db50-4013-8b77-2b7e4c8101c2" Nov 25 10:50:48 crc kubenswrapper[4813]: I1125 10:50:48.747759 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-577fbd7764-z9m8h" event={"ID":"32603f59-2392-4c3e-9d25-ba1fe7376687","Type":"ContainerStarted","Data":"5f3e2e477e533824824400bc152f64926a852de5e6d80f60a69c8bf67dfc5cf2"} Nov 25 10:50:48 crc kubenswrapper[4813]: I1125 10:50:48.748464 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-577fbd7764-z9m8h" Nov 25 10:50:48 crc kubenswrapper[4813]: I1125 10:50:48.750786 4813 generic.go:334] "Generic (PLEG): container finished" podID="48ea1018-a88f-4ef0-a82f-7e3b012522ec" containerID="3bd95e258595cc79eae7f50ce683229c518378d20824c087c3bc89823343ab14" exitCode=1 Nov 25 10:50:48 crc kubenswrapper[4813]: I1125 10:50:48.750851 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-864885998-bpbjt" event={"ID":"48ea1018-a88f-4ef0-a82f-7e3b012522ec","Type":"ContainerDied","Data":"3bd95e258595cc79eae7f50ce683229c518378d20824c087c3bc89823343ab14"} Nov 25 10:50:48 crc kubenswrapper[4813]: I1125 10:50:48.751347 4813 scope.go:117] "RemoveContainer" containerID="3bd95e258595cc79eae7f50ce683229c518378d20824c087c3bc89823343ab14" Nov 25 10:50:48 crc kubenswrapper[4813]: E1125 10:50:48.751617 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager 
pod=watcher-operator-controller-manager-864885998-bpbjt_openstack-operators(48ea1018-a88f-4ef0-a82f-7e3b012522ec)\"" pod="openstack-operators/watcher-operator-controller-manager-864885998-bpbjt" podUID="48ea1018-a88f-4ef0-a82f-7e3b012522ec" Nov 25 10:50:48 crc kubenswrapper[4813]: I1125 10:50:48.754667 4813 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="86379c39-b839-4552-949c-35431188a3a7" Nov 25 10:50:48 crc kubenswrapper[4813]: I1125 10:50:48.754706 4813 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="86379c39-b839-4552-949c-35431188a3a7" Nov 25 10:50:48 crc kubenswrapper[4813]: I1125 10:50:48.835231 4813 scope.go:117] "RemoveContainer" containerID="76db09afdbd7878d0e725a85cae6ef51ea46a3e2f3750023c641aa307705dc45" Nov 25 10:50:48 crc kubenswrapper[4813]: I1125 10:50:48.909880 4813 scope.go:117] "RemoveContainer" containerID="14eb11c01b6d36ebc30b5c1849b014d01064dc504a5e87489bc440df8d2dba84" Nov 25 10:50:48 crc kubenswrapper[4813]: I1125 10:50:48.985261 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/openstack-operator-controller-manager-5ffc8f797b-hbwwd" Nov 25 10:50:48 crc kubenswrapper[4813]: I1125 10:50:48.986205 4813 scope.go:117] "RemoveContainer" containerID="2ff22368afff2caaf623f9590ad4d44ff7f5d8f168fc61d3db437d96e48f8683" Nov 25 10:50:48 crc kubenswrapper[4813]: E1125 10:50:48.986512 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=openstack-operator-controller-manager-5ffc8f797b-hbwwd_openstack-operators(09bd1800-0aaa-4908-ac58-e0890a2a309f)\"" pod="openstack-operators/openstack-operator-controller-manager-5ffc8f797b-hbwwd" podUID="09bd1800-0aaa-4908-ac58-e0890a2a309f" Nov 25 10:50:48 crc kubenswrapper[4813]: I1125 10:50:48.986581 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-5ffc8f797b-hbwwd" Nov 25 10:50:49 crc kubenswrapper[4813]: E1125 10:50:49.121977 4813 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2bf03402_32ec_423d_a6af_657bc0cfeb15.slice/crio-conmon-f5c803b338997a2127e546a385ad4b1241de953818b9e737d25f6a6ed6ccb80d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeaf6f1c0_6585_4eba_8baf_942ed2503735.slice/crio-264694fb3363c363000af0fabea19789703c2886c007b5b5b7dc0911decc420a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeaf6f1c0_6585_4eba_8baf_942ed2503735.slice/crio-conmon-264694fb3363c363000af0fabea19789703c2886c007b5b5b7dc0911decc420a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71c5bfc5_a289_4942_bc55_819f06787eb6.slice/crio-conmon-edffe6489b5c73fee9cf8cbb8977f408153e00bc3bbc9a6395f0b2d4426f9935.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa2934d9_d547_49d0_9d06_232120b44fa1.slice/crio-conmon-5f209543dcf3c6c9fb9d1758f99f40344cb7a6cae23f523fdc59a66663c17b52.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71c5bfc5_a289_4942_bc55_819f06787eb6.slice/crio-edffe6489b5c73fee9cf8cbb8977f408153e00bc3bbc9a6395f0b2d4426f9935.scope\": RecentStats: unable to find data in memory cache]" Nov 25 10:50:49 crc kubenswrapper[4813]: I1125 10:50:49.643400 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 10:50:49 crc kubenswrapper[4813]: I1125 10:50:49.643475 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 10:50:49 crc kubenswrapper[4813]: I1125 10:50:49.648989 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 10:50:49 crc kubenswrapper[4813]: I1125 10:50:49.770663 4813 generic.go:334] "Generic (PLEG): container finished" podID="49b29226-49bf-4d59-9c7f-998d924bdace" containerID="28398be9153460c6be147a14357556d4ffc02fadeb49a7434f9278106ec30e34" exitCode=1 Nov 25 10:50:49 crc kubenswrapper[4813]: I1125 10:50:49.770745 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cb74df96-cwrzw" event={"ID":"49b29226-49bf-4d59-9c7f-998d924bdace","Type":"ContainerDied","Data":"28398be9153460c6be147a14357556d4ffc02fadeb49a7434f9278106ec30e34"} Nov 25 10:50:49 crc kubenswrapper[4813]: I1125 10:50:49.771391 4813 scope.go:117] "RemoveContainer" containerID="28398be9153460c6be147a14357556d4ffc02fadeb49a7434f9278106ec30e34" Nov 25 10:50:49 crc kubenswrapper[4813]: I1125 10:50:49.773329 4813 generic.go:334] "Generic (PLEG): container finished" podID="eaf6f1c0-6585-4eba-8baf-942ed2503735" containerID="264694fb3363c363000af0fabea19789703c2886c007b5b5b7dc0911decc420a" exitCode=1 Nov 25 10:50:49 crc kubenswrapper[4813]: I1125 10:50:49.773712 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-774b86978c-f6dvp" event={"ID":"eaf6f1c0-6585-4eba-8baf-942ed2503735","Type":"ContainerDied","Data":"264694fb3363c363000af0fabea19789703c2886c007b5b5b7dc0911decc420a"} Nov 25 10:50:49 crc kubenswrapper[4813]: I1125 10:50:49.773776 4813 scope.go:117] "RemoveContainer" containerID="2d86425d39a76afbae7bbc79c5701956f6ea1837959e4367e78b7b49ded3ad6c" Nov 25 10:50:49 crc kubenswrapper[4813]: I1125 10:50:49.774477 4813 scope.go:117] "RemoveContainer" containerID="264694fb3363c363000af0fabea19789703c2886c007b5b5b7dc0911decc420a" Nov 25 10:50:49 crc kubenswrapper[4813]: E1125 10:50:49.774774 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=heat-operator-controller-manager-774b86978c-f6dvp_openstack-operators(eaf6f1c0-6585-4eba-8baf-942ed2503735)\"" pod="openstack-operators/heat-operator-controller-manager-774b86978c-f6dvp" podUID="eaf6f1c0-6585-4eba-8baf-942ed2503735" Nov 25 10:50:49 crc kubenswrapper[4813]: I1125 10:50:49.775979 4813 generic.go:334] "Generic (PLEG): container finished" podID="a31ffbb8-0255-45d6-9125-6cccc7b444ba" containerID="8818b6a10215e75e42b1355f20c5537b4b5710923718cbe61419fb5d93da0562" exitCode=1 Nov 25 10:50:49 crc kubenswrapper[4813]: I1125 10:50:49.776030 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-gjs27" 
event={"ID":"a31ffbb8-0255-45d6-9125-6cccc7b444ba","Type":"ContainerDied","Data":"8818b6a10215e75e42b1355f20c5537b4b5710923718cbe61419fb5d93da0562"} Nov 25 10:50:49 crc kubenswrapper[4813]: I1125 10:50:49.776538 4813 scope.go:117] "RemoveContainer" containerID="8818b6a10215e75e42b1355f20c5537b4b5710923718cbe61419fb5d93da0562" Nov 25 10:50:49 crc kubenswrapper[4813]: I1125 10:50:49.778539 4813 generic.go:334] "Generic (PLEG): container finished" podID="71c5bfc5-a289-4942-bc55-819f06787eb6" containerID="edffe6489b5c73fee9cf8cbb8977f408153e00bc3bbc9a6395f0b2d4426f9935" exitCode=1 Nov 25 10:50:49 crc kubenswrapper[4813]: I1125 10:50:49.778599 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-547cf68667-6v6dd" event={"ID":"71c5bfc5-a289-4942-bc55-819f06787eb6","Type":"ContainerDied","Data":"edffe6489b5c73fee9cf8cbb8977f408153e00bc3bbc9a6395f0b2d4426f9935"} Nov 25 10:50:49 crc kubenswrapper[4813]: I1125 10:50:49.779080 4813 scope.go:117] "RemoveContainer" containerID="edffe6489b5c73fee9cf8cbb8977f408153e00bc3bbc9a6395f0b2d4426f9935" Nov 25 10:50:49 crc kubenswrapper[4813]: E1125 10:50:49.779541 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=glance-operator-controller-manager-547cf68667-6v6dd_openstack-operators(71c5bfc5-a289-4942-bc55-819f06787eb6)\"" pod="openstack-operators/glance-operator-controller-manager-547cf68667-6v6dd" podUID="71c5bfc5-a289-4942-bc55-819f06787eb6" Nov 25 10:50:49 crc kubenswrapper[4813]: I1125 10:50:49.783260 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-v2clw" event={"ID":"0a946ff2-f2e3-48c2-ae3b-774a4ea85492","Type":"ContainerStarted","Data":"c8ea243d6a587e563a20408527d66d98ba3b886df4004060234b95f5c8225a55"} Nov 25 10:50:49 crc kubenswrapper[4813]: I1125 10:50:49.783482 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-v2clw" Nov 25 10:50:49 crc kubenswrapper[4813]: I1125 10:50:49.798190 4813 generic.go:334] "Generic (PLEG): container finished" podID="9093a664-86f3-4349-bd13-0a5e4aca8036" containerID="fb049d972ea51300a04a546dddd9759b2d8d961453b95294c932f7a257597f6f" exitCode=1 Nov 25 10:50:49 crc kubenswrapper[4813]: I1125 10:50:49.798264 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-2d2x7" event={"ID":"9093a664-86f3-4349-bd13-0a5e4aca8036","Type":"ContainerDied","Data":"fb049d972ea51300a04a546dddd9759b2d8d961453b95294c932f7a257597f6f"} Nov 25 10:50:49 crc kubenswrapper[4813]: I1125 10:50:49.798807 4813 scope.go:117] "RemoveContainer" containerID="fb049d972ea51300a04a546dddd9759b2d8d961453b95294c932f7a257597f6f" Nov 25 10:50:49 crc kubenswrapper[4813]: I1125 10:50:49.801248 4813 generic.go:334] "Generic (PLEG): container finished" podID="06c81a1e-0461-4457-85ea-1a4060423eda" containerID="0eaafd13da0467f35b0a7f4465a2e7e47d34f239a8a0985e6c60e616eecd1fbf" exitCode=1 Nov 25 10:50:49 crc kubenswrapper[4813]: I1125 10:50:49.801321 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-858778c9dc-fs9sm" event={"ID":"06c81a1e-0461-4457-85ea-1a4060423eda","Type":"ContainerDied","Data":"0eaafd13da0467f35b0a7f4465a2e7e47d34f239a8a0985e6c60e616eecd1fbf"} 
Nov 25 10:50:49 crc kubenswrapper[4813]: I1125 10:50:49.801877 4813 scope.go:117] "RemoveContainer" containerID="0eaafd13da0467f35b0a7f4465a2e7e47d34f239a8a0985e6c60e616eecd1fbf" Nov 25 10:50:49 crc kubenswrapper[4813]: I1125 10:50:49.808897 4813 generic.go:334] "Generic (PLEG): container finished" podID="94c3d2b4-f1bb-402d-a39d-78e16bee970b" containerID="ff9c57c50ce56d5b51d6abe1535d43cb0fba6d50162bcc69bc19e4bf3d433028" exitCode=1 Nov 25 10:50:49 crc kubenswrapper[4813]: I1125 10:50:49.808943 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-fjkzd" event={"ID":"94c3d2b4-f1bb-402d-a39d-78e16bee970b","Type":"ContainerDied","Data":"ff9c57c50ce56d5b51d6abe1535d43cb0fba6d50162bcc69bc19e4bf3d433028"} Nov 25 10:50:49 crc kubenswrapper[4813]: I1125 10:50:49.809241 4813 scope.go:117] "RemoveContainer" containerID="ff9c57c50ce56d5b51d6abe1535d43cb0fba6d50162bcc69bc19e4bf3d433028" Nov 25 10:50:49 crc kubenswrapper[4813]: I1125 10:50:49.811865 4813 generic.go:334] "Generic (PLEG): container finished" podID="efca9205-8a59-45ce-8c50-36b0d0389f12" containerID="e0cb40cfa7225ebe4e4ed8f072806083611272de9074dba01cdf54df049a2187" exitCode=1 Nov 25 10:50:49 crc kubenswrapper[4813]: I1125 10:50:49.811917 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-jcjzx" event={"ID":"efca9205-8a59-45ce-8c50-36b0d0389f12","Type":"ContainerDied","Data":"e0cb40cfa7225ebe4e4ed8f072806083611272de9074dba01cdf54df049a2187"} Nov 25 10:50:49 crc kubenswrapper[4813]: I1125 10:50:49.812210 4813 scope.go:117] "RemoveContainer" containerID="e0cb40cfa7225ebe4e4ed8f072806083611272de9074dba01cdf54df049a2187" Nov 25 10:50:49 crc kubenswrapper[4813]: I1125 10:50:49.815712 4813 generic.go:334] "Generic (PLEG): container finished" podID="2bf03402-32ec-423d-a6af-657bc0cfeb15" containerID="f5c803b338997a2127e546a385ad4b1241de953818b9e737d25f6a6ed6ccb80d" exitCode=1 Nov 25 10:50:49 crc kubenswrapper[4813]: I1125 10:50:49.815802 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qd4tx" event={"ID":"2bf03402-32ec-423d-a6af-657bc0cfeb15","Type":"ContainerDied","Data":"f5c803b338997a2127e546a385ad4b1241de953818b9e737d25f6a6ed6ccb80d"} Nov 25 10:50:49 crc kubenswrapper[4813]: I1125 10:50:49.816534 4813 scope.go:117] "RemoveContainer" containerID="f5c803b338997a2127e546a385ad4b1241de953818b9e737d25f6a6ed6ccb80d" Nov 25 10:50:49 crc kubenswrapper[4813]: I1125 10:50:49.820974 4813 generic.go:334] "Generic (PLEG): container finished" podID="7921584b-8ce0-45b8-8a56-ab0fdde43582" containerID="981d6ff1513f5143a0da7118746e1562edac259524f9d63025c633639fcbd4f7" exitCode=1 Nov 25 10:50:49 crc kubenswrapper[4813]: I1125 10:50:49.821043 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-76j46" event={"ID":"7921584b-8ce0-45b8-8a56-ab0fdde43582","Type":"ContainerDied","Data":"981d6ff1513f5143a0da7118746e1562edac259524f9d63025c633639fcbd4f7"} Nov 25 10:50:49 crc kubenswrapper[4813]: I1125 10:50:49.821417 4813 scope.go:117] "RemoveContainer" containerID="981d6ff1513f5143a0da7118746e1562edac259524f9d63025c633639fcbd4f7" Nov 25 10:50:49 crc kubenswrapper[4813]: I1125 10:50:49.823946 4813 generic.go:334] "Generic (PLEG): container finished" podID="aa2934d9-d547-49d0-9d06-232120b44fa1" containerID="5f209543dcf3c6c9fb9d1758f99f40344cb7a6cae23f523fdc59a66663c17b52" exitCode=1 
Nov 25 10:50:49 crc kubenswrapper[4813]: I1125 10:50:49.824271 4813 scope.go:117] "RemoveContainer" containerID="2ff22368afff2caaf623f9590ad4d44ff7f5d8f168fc61d3db437d96e48f8683" Nov 25 10:50:49 crc kubenswrapper[4813]: E1125 10:50:49.824466 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=openstack-operator-controller-manager-5ffc8f797b-hbwwd_openstack-operators(09bd1800-0aaa-4908-ac58-e0890a2a309f)\"" pod="openstack-operators/openstack-operator-controller-manager-5ffc8f797b-hbwwd" podUID="09bd1800-0aaa-4908-ac58-e0890a2a309f" Nov 25 10:50:49 crc kubenswrapper[4813]: I1125 10:50:49.824503 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-hjqzd" event={"ID":"aa2934d9-d547-49d0-9d06-232120b44fa1","Type":"ContainerDied","Data":"5f209543dcf3c6c9fb9d1758f99f40344cb7a6cae23f523fdc59a66663c17b52"} Nov 25 10:50:49 crc kubenswrapper[4813]: I1125 10:50:49.824761 4813 scope.go:117] "RemoveContainer" containerID="5f209543dcf3c6c9fb9d1758f99f40344cb7a6cae23f523fdc59a66663c17b52" Nov 25 10:50:49 crc kubenswrapper[4813]: I1125 10:50:49.825271 4813 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="86379c39-b839-4552-949c-35431188a3a7" Nov 25 10:50:49 crc kubenswrapper[4813]: I1125 10:50:49.825285 4813 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="86379c39-b839-4552-949c-35431188a3a7" Nov 25 10:50:49 crc kubenswrapper[4813]: I1125 10:50:49.830142 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 10:50:49 crc kubenswrapper[4813]: I1125 10:50:49.931035 4813 scope.go:117] "RemoveContainer" containerID="97c3da655b8a78107c3d2c8c2e2c1a9f338de1a59c124f53ea5084c05a530049" Nov 25 10:50:50 crc kubenswrapper[4813]: I1125 10:50:50.420931 4813 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="498cd933-bb93-4a44-97ba-b46408dc1613" Nov 25 10:50:50 crc kubenswrapper[4813]: I1125 10:50:50.622897 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 10:50:50 crc kubenswrapper[4813]: I1125 10:50:50.626892 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 10:50:50 crc kubenswrapper[4813]: W1125 10:50:50.785350 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bf77eb8_82fb_4ad7_9cf8_57d017a0ce0d.slice/crio-754417fcd67a8a9d23a00bd552127f3c8b2c05ca91b053ddd237ea9421f3072e WatchSource:0}: Error finding container 754417fcd67a8a9d23a00bd552127f3c8b2c05ca91b053ddd237ea9421f3072e: Status 404 returned error can't find the container with id 754417fcd67a8a9d23a00bd552127f3c8b2c05ca91b053ddd237ea9421f3072e Nov 25 10:50:50 crc kubenswrapper[4813]: I1125 10:50:50.837557 4813 generic.go:334] "Generic (PLEG): container finished" podID="efca9205-8a59-45ce-8c50-36b0d0389f12" containerID="ffa68b56ce90c24616b3f49d4d4f7a9b8dff8addac564d9c4798adc5a764af9c" exitCode=1 Nov 25 10:50:50 crc kubenswrapper[4813]: I1125 10:50:50.837650 4813 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-jcjzx" event={"ID":"efca9205-8a59-45ce-8c50-36b0d0389f12","Type":"ContainerDied","Data":"ffa68b56ce90c24616b3f49d4d4f7a9b8dff8addac564d9c4798adc5a764af9c"} Nov 25 10:50:50 crc kubenswrapper[4813]: I1125 10:50:50.838375 4813 scope.go:117] "RemoveContainer" containerID="e0cb40cfa7225ebe4e4ed8f072806083611272de9074dba01cdf54df049a2187" Nov 25 10:50:50 crc kubenswrapper[4813]: I1125 10:50:50.839131 4813 scope.go:117] "RemoveContainer" containerID="ffa68b56ce90c24616b3f49d4d4f7a9b8dff8addac564d9c4798adc5a764af9c" Nov 25 10:50:50 crc kubenswrapper[4813]: E1125 10:50:50.839408 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=manila-operator-controller-manager-58bb8d67cc-jcjzx_openstack-operators(efca9205-8a59-45ce-8c50-36b0d0389f12)\"" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-jcjzx" podUID="efca9205-8a59-45ce-8c50-36b0d0389f12" Nov 25 10:50:50 crc kubenswrapper[4813]: I1125 10:50:50.843967 4813 generic.go:334] "Generic (PLEG): container finished" podID="94c3d2b4-f1bb-402d-a39d-78e16bee970b" containerID="6ab88f58405febfc4b2658925fe24837daf2715de3bde72851222fc9b6284fca" exitCode=1 Nov 25 10:50:50 crc kubenswrapper[4813]: I1125 10:50:50.844241 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-fjkzd" event={"ID":"94c3d2b4-f1bb-402d-a39d-78e16bee970b","Type":"ContainerDied","Data":"6ab88f58405febfc4b2658925fe24837daf2715de3bde72851222fc9b6284fca"} Nov 25 10:50:50 crc kubenswrapper[4813]: I1125 10:50:50.845689 4813 scope.go:117] "RemoveContainer" containerID="6ab88f58405febfc4b2658925fe24837daf2715de3bde72851222fc9b6284fca" Nov 25 10:50:50 crc kubenswrapper[4813]: E1125 10:50:50.846090 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=swift-operator-controller-manager-6fdc4fcf86-fjkzd_openstack-operators(94c3d2b4-f1bb-402d-a39d-78e16bee970b)\"" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-fjkzd" podUID="94c3d2b4-f1bb-402d-a39d-78e16bee970b" Nov 25 10:50:50 crc kubenswrapper[4813]: I1125 10:50:50.847775 4813 generic.go:334] "Generic (PLEG): container finished" podID="7921584b-8ce0-45b8-8a56-ab0fdde43582" containerID="8930151d465ee02dbf1e9291a7bc76a3b99ca6cc4c297b46b49817360947303d" exitCode=1 Nov 25 10:50:50 crc kubenswrapper[4813]: I1125 10:50:50.847826 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-76j46" event={"ID":"7921584b-8ce0-45b8-8a56-ab0fdde43582","Type":"ContainerDied","Data":"8930151d465ee02dbf1e9291a7bc76a3b99ca6cc4c297b46b49817360947303d"} Nov 25 10:50:50 crc kubenswrapper[4813]: I1125 10:50:50.848591 4813 scope.go:117] "RemoveContainer" containerID="8930151d465ee02dbf1e9291a7bc76a3b99ca6cc4c297b46b49817360947303d" Nov 25 10:50:50 crc kubenswrapper[4813]: E1125 10:50:50.848885 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=keystone-operator-controller-manager-748dc6576f-76j46_openstack-operators(7921584b-8ce0-45b8-8a56-ab0fdde43582)\"" 
pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-76j46" podUID="7921584b-8ce0-45b8-8a56-ab0fdde43582" Nov 25 10:50:50 crc kubenswrapper[4813]: I1125 10:50:50.852156 4813 generic.go:334] "Generic (PLEG): container finished" podID="aa2934d9-d547-49d0-9d06-232120b44fa1" containerID="cc703f9838f9f56a7bd18de6e62a1a2bd7339f8373a7b51090ffaf5f2395482c" exitCode=1 Nov 25 10:50:50 crc kubenswrapper[4813]: I1125 10:50:50.852227 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-hjqzd" event={"ID":"aa2934d9-d547-49d0-9d06-232120b44fa1","Type":"ContainerDied","Data":"cc703f9838f9f56a7bd18de6e62a1a2bd7339f8373a7b51090ffaf5f2395482c"} Nov 25 10:50:50 crc kubenswrapper[4813]: I1125 10:50:50.852857 4813 scope.go:117] "RemoveContainer" containerID="cc703f9838f9f56a7bd18de6e62a1a2bd7339f8373a7b51090ffaf5f2395482c" Nov 25 10:50:50 crc kubenswrapper[4813]: E1125 10:50:50.853134 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=designate-operator-controller-manager-7d695c9b56-hjqzd_openstack-operators(aa2934d9-d547-49d0-9d06-232120b44fa1)\"" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-hjqzd" podUID="aa2934d9-d547-49d0-9d06-232120b44fa1" Nov 25 10:50:50 crc kubenswrapper[4813]: I1125 10:50:50.855265 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cb74df96-cwrzw" event={"ID":"49b29226-49bf-4d59-9c7f-998d924bdace","Type":"ContainerStarted","Data":"b8b815a71e6cffe063d0171ec06602fc28926b8f9063be953b6af14a37c8e3fe"} Nov 25 10:50:50 crc kubenswrapper[4813]: I1125 10:50:50.855458 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5cb74df96-cwrzw" Nov 25 10:50:50 crc kubenswrapper[4813]: I1125 10:50:50.857640 4813 generic.go:334] "Generic (PLEG): container finished" podID="2bf03402-32ec-423d-a6af-657bc0cfeb15" containerID="da5fb8b8e603c05b9d5c4ba2ad623ce2ce8911c409e387179676d32d29cf262a" exitCode=1 Nov 25 10:50:50 crc kubenswrapper[4813]: I1125 10:50:50.857716 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qd4tx" event={"ID":"2bf03402-32ec-423d-a6af-657bc0cfeb15","Type":"ContainerDied","Data":"da5fb8b8e603c05b9d5c4ba2ad623ce2ce8911c409e387179676d32d29cf262a"} Nov 25 10:50:50 crc kubenswrapper[4813]: I1125 10:50:50.858387 4813 scope.go:117] "RemoveContainer" containerID="da5fb8b8e603c05b9d5c4ba2ad623ce2ce8911c409e387179676d32d29cf262a" Nov 25 10:50:50 crc kubenswrapper[4813]: E1125 10:50:50.858657 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=operator pod=rabbitmq-cluster-operator-manager-668c99d594-qd4tx_openstack-operators(2bf03402-32ec-423d-a6af-657bc0cfeb15)\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qd4tx" podUID="2bf03402-32ec-423d-a6af-657bc0cfeb15" Nov 25 10:50:50 crc kubenswrapper[4813]: I1125 10:50:50.859903 4813 generic.go:334] "Generic (PLEG): container finished" podID="9093a664-86f3-4349-bd13-0a5e4aca8036" containerID="a8658001c24e23d0f354d33250a7b2c39f4e0e69e51a660eff607088497ca501" exitCode=1 Nov 25 10:50:50 crc kubenswrapper[4813]: I1125 10:50:50.859934 4813 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-2d2x7" event={"ID":"9093a664-86f3-4349-bd13-0a5e4aca8036","Type":"ContainerDied","Data":"a8658001c24e23d0f354d33250a7b2c39f4e0e69e51a660eff607088497ca501"} Nov 25 10:50:50 crc kubenswrapper[4813]: I1125 10:50:50.860595 4813 scope.go:117] "RemoveContainer" containerID="a8658001c24e23d0f354d33250a7b2c39f4e0e69e51a660eff607088497ca501" Nov 25 10:50:50 crc kubenswrapper[4813]: I1125 10:50:50.861941 4813 generic.go:334] "Generic (PLEG): container finished" podID="06c81a1e-0461-4457-85ea-1a4060423eda" containerID="2eaadf3a93459490dde061b673492d47d51a2c68fb5222284d46e52587ffb474" exitCode=1 Nov 25 10:50:50 crc kubenswrapper[4813]: I1125 10:50:50.861963 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-858778c9dc-fs9sm" event={"ID":"06c81a1e-0461-4457-85ea-1a4060423eda","Type":"ContainerDied","Data":"2eaadf3a93459490dde061b673492d47d51a2c68fb5222284d46e52587ffb474"} Nov 25 10:50:50 crc kubenswrapper[4813]: E1125 10:50:50.862396 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=placement-operator-controller-manager-5db546f9d9-2d2x7_openstack-operators(9093a664-86f3-4349-bd13-0a5e4aca8036)\"" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-2d2x7" podUID="9093a664-86f3-4349-bd13-0a5e4aca8036" Nov 25 10:50:50 crc kubenswrapper[4813]: I1125 10:50:50.862553 4813 scope.go:117] "RemoveContainer" containerID="2eaadf3a93459490dde061b673492d47d51a2c68fb5222284d46e52587ffb474" Nov 25 10:50:50 crc kubenswrapper[4813]: E1125 10:50:50.862842 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=infra-operator-controller-manager-858778c9dc-fs9sm_openstack-operators(06c81a1e-0461-4457-85ea-1a4060423eda)\"" pod="openstack-operators/infra-operator-controller-manager-858778c9dc-fs9sm" podUID="06c81a1e-0461-4457-85ea-1a4060423eda" Nov 25 10:50:50 crc kubenswrapper[4813]: I1125 10:50:50.864481 4813 generic.go:334] "Generic (PLEG): container finished" podID="a31ffbb8-0255-45d6-9125-6cccc7b444ba" containerID="2f3df6ee96c1826ef338deaed964bd5c5fedf13358353937912ea52c92fa80f8" exitCode=1 Nov 25 10:50:50 crc kubenswrapper[4813]: I1125 10:50:50.864564 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-gjs27" event={"ID":"a31ffbb8-0255-45d6-9125-6cccc7b444ba","Type":"ContainerDied","Data":"2f3df6ee96c1826ef338deaed964bd5c5fedf13358353937912ea52c92fa80f8"} Nov 25 10:50:50 crc kubenswrapper[4813]: I1125 10:50:50.865529 4813 scope.go:117] "RemoveContainer" containerID="2f3df6ee96c1826ef338deaed964bd5c5fedf13358353937912ea52c92fa80f8" Nov 25 10:50:50 crc kubenswrapper[4813]: E1125 10:50:50.865844 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=octavia-operator-controller-manager-fd75fd47d-gjs27_openstack-operators(a31ffbb8-0255-45d6-9125-6cccc7b444ba)\"" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-gjs27" podUID="a31ffbb8-0255-45d6-9125-6cccc7b444ba" Nov 25 10:50:50 crc kubenswrapper[4813]: I1125 10:50:50.865862 4813 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/ovn-controller-ovs-kzv7f" event={"ID":"9bf77eb8-82fb-4ad7-9cf8-57d017a0ce0d","Type":"ContainerStarted","Data":"754417fcd67a8a9d23a00bd552127f3c8b2c05ca91b053ddd237ea9421f3072e"} Nov 25 10:50:50 crc kubenswrapper[4813]: I1125 10:50:50.866351 4813 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="86379c39-b839-4552-949c-35431188a3a7" Nov 25 10:50:50 crc kubenswrapper[4813]: I1125 10:50:50.866411 4813 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="86379c39-b839-4552-949c-35431188a3a7" Nov 25 10:50:50 crc kubenswrapper[4813]: I1125 10:50:50.908211 4813 scope.go:117] "RemoveContainer" containerID="ff9c57c50ce56d5b51d6abe1535d43cb0fba6d50162bcc69bc19e4bf3d433028" Nov 25 10:50:51 crc kubenswrapper[4813]: I1125 10:50:51.028251 4813 scope.go:117] "RemoveContainer" containerID="981d6ff1513f5143a0da7118746e1562edac259524f9d63025c633639fcbd4f7" Nov 25 10:50:51 crc kubenswrapper[4813]: I1125 10:50:51.126896 4813 scope.go:117] "RemoveContainer" containerID="5f209543dcf3c6c9fb9d1758f99f40344cb7a6cae23f523fdc59a66663c17b52" Nov 25 10:50:51 crc kubenswrapper[4813]: I1125 10:50:51.212975 4813 scope.go:117] "RemoveContainer" containerID="f5c803b338997a2127e546a385ad4b1241de953818b9e737d25f6a6ed6ccb80d" Nov 25 10:50:51 crc kubenswrapper[4813]: I1125 10:50:51.345107 4813 scope.go:117] "RemoveContainer" containerID="fb049d972ea51300a04a546dddd9759b2d8d961453b95294c932f7a257597f6f" Nov 25 10:50:51 crc kubenswrapper[4813]: I1125 10:50:51.426221 4813 scope.go:117] "RemoveContainer" containerID="0eaafd13da0467f35b0a7f4465a2e7e47d34f239a8a0985e6c60e616eecd1fbf" Nov 25 10:50:51 crc kubenswrapper[4813]: I1125 10:50:51.515655 4813 scope.go:117] "RemoveContainer" containerID="8818b6a10215e75e42b1355f20c5537b4b5710923718cbe61419fb5d93da0562" Nov 25 10:50:51 crc kubenswrapper[4813]: I1125 10:50:51.916054 4813 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="86379c39-b839-4552-949c-35431188a3a7" Nov 25 10:50:51 crc kubenswrapper[4813]: I1125 10:50:51.916093 4813 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="86379c39-b839-4552-949c-35431188a3a7" Nov 25 10:50:52 crc kubenswrapper[4813]: I1125 10:50:52.911299 4813 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="498cd933-bb93-4a44-97ba-b46408dc1613" Nov 25 10:50:54 crc kubenswrapper[4813]: I1125 10:50:54.343832 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-4wff2" Nov 25 10:50:54 crc kubenswrapper[4813]: I1125 10:50:54.345566 4813 scope.go:117] "RemoveContainer" containerID="cda74bc0da0a48be166875072753f56094069000bba93d31425a2aa47be4245b" Nov 25 10:50:54 crc kubenswrapper[4813]: E1125 10:50:54.345880 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=barbican-operator-controller-manager-86dc4d89c8-4wff2_openstack-operators(03c63a63-9a46-4bda-941b-8c5ba81a13fe)\"" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-4wff2" podUID="03c63a63-9a46-4bda-941b-8c5ba81a13fe" Nov 25 10:50:54 crc kubenswrapper[4813]: I1125 10:50:54.361628 4813 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-dvfd9" Nov 25 10:50:54 crc kubenswrapper[4813]: I1125 10:50:54.362390 4813 scope.go:117] "RemoveContainer" containerID="1140fc40e47e8c0d7c57a6561b72f2fbaac44f04f7b4ed24f1f25b537b03c0ce" Nov 25 10:50:54 crc kubenswrapper[4813]: E1125 10:50:54.362654 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=cinder-operator-controller-manager-79856dc55c-dvfd9_openstack-operators(a650bdd3-2541-4b76-b5db-64273262bc06)\"" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-dvfd9" podUID="a650bdd3-2541-4b76-b5db-64273262bc06" Nov 25 10:50:54 crc kubenswrapper[4813]: I1125 10:50:54.380776 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-hjqzd" Nov 25 10:50:54 crc kubenswrapper[4813]: I1125 10:50:54.382049 4813 scope.go:117] "RemoveContainer" containerID="cc703f9838f9f56a7bd18de6e62a1a2bd7339f8373a7b51090ffaf5f2395482c" Nov 25 10:50:54 crc kubenswrapper[4813]: E1125 10:50:54.382524 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=designate-operator-controller-manager-7d695c9b56-hjqzd_openstack-operators(aa2934d9-d547-49d0-9d06-232120b44fa1)\"" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-hjqzd" podUID="aa2934d9-d547-49d0-9d06-232120b44fa1" Nov 25 10:50:54 crc kubenswrapper[4813]: I1125 10:50:54.414918 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-547cf68667-6v6dd" Nov 25 10:50:54 crc kubenswrapper[4813]: I1125 10:50:54.415658 4813 scope.go:117] "RemoveContainer" containerID="edffe6489b5c73fee9cf8cbb8977f408153e00bc3bbc9a6395f0b2d4426f9935" Nov 25 10:50:54 crc kubenswrapper[4813]: E1125 10:50:54.415934 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=glance-operator-controller-manager-547cf68667-6v6dd_openstack-operators(71c5bfc5-a289-4942-bc55-819f06787eb6)\"" pod="openstack-operators/glance-operator-controller-manager-547cf68667-6v6dd" podUID="71c5bfc5-a289-4942-bc55-819f06787eb6" Nov 25 10:50:54 crc kubenswrapper[4813]: I1125 10:50:54.457271 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-774b86978c-f6dvp" Nov 25 10:50:54 crc kubenswrapper[4813]: I1125 10:50:54.458025 4813 scope.go:117] "RemoveContainer" containerID="264694fb3363c363000af0fabea19789703c2886c007b5b5b7dc0911decc420a" Nov 25 10:50:54 crc kubenswrapper[4813]: E1125 10:50:54.458235 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=heat-operator-controller-manager-774b86978c-f6dvp_openstack-operators(eaf6f1c0-6585-4eba-8baf-942ed2503735)\"" pod="openstack-operators/heat-operator-controller-manager-774b86978c-f6dvp" podUID="eaf6f1c0-6585-4eba-8baf-942ed2503735" Nov 25 10:50:54 crc kubenswrapper[4813]: I1125 10:50:54.471001 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/horizon-operator-controller-manager-68c9694994-8spkk" Nov 25 10:50:54 crc kubenswrapper[4813]: I1125 10:50:54.471708 4813 scope.go:117] "RemoveContainer" containerID="aa8bdca827af36c5e7fb4dc86eaf3de61567a99aac972137a16c2ce4565816ff" Nov 25 10:50:54 crc kubenswrapper[4813]: E1125 10:50:54.471935 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=horizon-operator-controller-manager-68c9694994-8spkk_openstack-operators(af18e07e-95b3-476f-9604-824c36ae74a5)\"" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-8spkk" podUID="af18e07e-95b3-476f-9604-824c36ae74a5" Nov 25 10:50:54 crc kubenswrapper[4813]: I1125 10:50:54.573903 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-blrjt" Nov 25 10:50:54 crc kubenswrapper[4813]: I1125 10:50:54.574969 4813 scope.go:117] "RemoveContainer" containerID="40fdb2fd01b91262d81b9fa748bb8d4c5cc505a7dfc16986f11d66d70563bf46" Nov 25 10:50:54 crc kubenswrapper[4813]: E1125 10:50:54.575354 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=ironic-operator-controller-manager-5bfcdc958c-blrjt_openstack-operators(d4a62556-e6e8-42dc-b7e4-180c40611393)\"" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-blrjt" podUID="d4a62556-e6e8-42dc-b7e4-180c40611393" Nov 25 10:50:54 crc kubenswrapper[4813]: I1125 10:50:54.584779 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-858778c9dc-fs9sm" Nov 25 10:50:54 crc kubenswrapper[4813]: I1125 10:50:54.585525 4813 scope.go:117] "RemoveContainer" containerID="2eaadf3a93459490dde061b673492d47d51a2c68fb5222284d46e52587ffb474" Nov 25 10:50:54 crc kubenswrapper[4813]: E1125 10:50:54.585818 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=infra-operator-controller-manager-858778c9dc-fs9sm_openstack-operators(06c81a1e-0461-4457-85ea-1a4060423eda)\"" pod="openstack-operators/infra-operator-controller-manager-858778c9dc-fs9sm" podUID="06c81a1e-0461-4457-85ea-1a4060423eda" Nov 25 10:50:54 crc kubenswrapper[4813]: I1125 10:50:54.748504 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-jcjzx" Nov 25 10:50:54 crc kubenswrapper[4813]: I1125 10:50:54.749141 4813 scope.go:117] "RemoveContainer" containerID="ffa68b56ce90c24616b3f49d4d4f7a9b8dff8addac564d9c4798adc5a764af9c" Nov 25 10:50:54 crc kubenswrapper[4813]: E1125 10:50:54.749543 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=manila-operator-controller-manager-58bb8d67cc-jcjzx_openstack-operators(efca9205-8a59-45ce-8c50-36b0d0389f12)\"" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-jcjzx" podUID="efca9205-8a59-45ce-8c50-36b0d0389f12" Nov 25 10:50:54 crc kubenswrapper[4813]: I1125 10:50:54.763746 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-76j46" Nov 25 10:50:54 crc kubenswrapper[4813]: I1125 10:50:54.765014 4813 scope.go:117] "RemoveContainer" containerID="8930151d465ee02dbf1e9291a7bc76a3b99ca6cc4c297b46b49817360947303d" Nov 25 10:50:54 crc kubenswrapper[4813]: E1125 10:50:54.765293 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=keystone-operator-controller-manager-748dc6576f-76j46_openstack-operators(7921584b-8ce0-45b8-8a56-ab0fdde43582)\"" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-76j46" podUID="7921584b-8ce0-45b8-8a56-ab0fdde43582" Nov 25 10:50:54 crc kubenswrapper[4813]: I1125 10:50:54.809197 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-5ldjd" Nov 25 10:50:54 crc kubenswrapper[4813]: I1125 10:50:54.809859 4813 scope.go:117] "RemoveContainer" containerID="d09637388df878cdf2bfa3d5e5d83aa661d35fdf74a36a7300cd4e0118bf3141" Nov 25 10:50:54 crc kubenswrapper[4813]: E1125 10:50:54.810091 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=mariadb-operator-controller-manager-cb6c4fdb7-5ldjd_openstack-operators(baf6f7bb-db50-4013-8b77-2b7e4c8101c2)\"" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-5ldjd" podUID="baf6f7bb-db50-4013-8b77-2b7e4c8101c2" Nov 25 10:50:54 crc kubenswrapper[4813]: I1125 10:50:54.825781 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-c6kw6" Nov 25 10:50:54 crc kubenswrapper[4813]: I1125 10:50:54.826765 4813 scope.go:117] "RemoveContainer" containerID="41b0500f8fda41e041f07f12a281727c6c4654deebb714ace797dfcdc6453b60" Nov 25 10:50:54 crc kubenswrapper[4813]: E1125 10:50:54.827069 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=neutron-operator-controller-manager-7c57c8bbc4-c6kw6_openstack-operators(b69526d6-6616-4536-a228-4cdb57e1881c)\"" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-c6kw6" podUID="b69526d6-6616-4536-a228-4cdb57e1881c" Nov 25 10:50:54 crc kubenswrapper[4813]: I1125 10:50:54.834518 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-6j272" Nov 25 10:50:54 crc kubenswrapper[4813]: I1125 10:50:54.835212 4813 scope.go:117] "RemoveContainer" containerID="b336d93ef0071b8168144d46bcb0c44981c4be7dc6628753c0c834d82d1cb9a9" Nov 25 10:50:54 crc kubenswrapper[4813]: E1125 10:50:54.835452 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=nova-operator-controller-manager-79556f57fc-6j272_openstack-operators(9374bbb0-b458-4c1c-a327-67bcbea83045)\"" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-6j272" podUID="9374bbb0-b458-4c1c-a327-67bcbea83045" Nov 25 10:50:54 crc kubenswrapper[4813]: I1125 10:50:54.878104 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-gjs27" Nov 25 10:50:54 crc kubenswrapper[4813]: I1125 10:50:54.878941 4813 scope.go:117] "RemoveContainer" containerID="2f3df6ee96c1826ef338deaed964bd5c5fedf13358353937912ea52c92fa80f8" Nov 25 10:50:54 crc kubenswrapper[4813]: E1125 10:50:54.879333 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=octavia-operator-controller-manager-fd75fd47d-gjs27_openstack-operators(a31ffbb8-0255-45d6-9125-6cccc7b444ba)\"" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-gjs27" podUID="a31ffbb8-0255-45d6-9125-6cccc7b444ba" Nov 25 10:50:54 crc kubenswrapper[4813]: I1125 10:50:54.942053 4813 generic.go:334] "Generic (PLEG): container finished" podID="78498723-5c73-4aa4-8480-ef20ce8593ac" containerID="23a89a70d897d66967d8b27980922bd55fae76bf3c5409e71c22919ef9f83c19" exitCode=0 Nov 25 10:50:54 crc kubenswrapper[4813]: I1125 10:50:54.942127 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-hdpgf" event={"ID":"78498723-5c73-4aa4-8480-ef20ce8593ac","Type":"ContainerDied","Data":"23a89a70d897d66967d8b27980922bd55fae76bf3c5409e71c22919ef9f83c19"} Nov 25 10:50:54 crc kubenswrapper[4813]: I1125 10:50:54.943819 4813 generic.go:334] "Generic (PLEG): container finished" podID="fe34d8fb-5b40-4191-8015-acb5ed8ea562" containerID="4615a8ee8e91c187c4653e8c28a0bc9ff1603cb48d303069542e282bd550f9af" exitCode=0 Nov 25 10:50:54 crc kubenswrapper[4813]: I1125 10:50:54.943904 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-6bkfh" event={"ID":"fe34d8fb-5b40-4191-8015-acb5ed8ea562","Type":"ContainerDied","Data":"4615a8ee8e91c187c4653e8c28a0bc9ff1603cb48d303069542e282bd550f9af"} Nov 25 10:50:54 crc kubenswrapper[4813]: I1125 10:50:54.947990 4813 generic.go:334] "Generic (PLEG): container finished" podID="9bf77eb8-82fb-4ad7-9cf8-57d017a0ce0d" containerID="e5c3d69b7ca4e11bfbf59f8c353644bc363b572f84eea87830e29ccf5972c775" exitCode=0 Nov 25 10:50:54 crc kubenswrapper[4813]: I1125 10:50:54.948038 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-kzv7f" event={"ID":"9bf77eb8-82fb-4ad7-9cf8-57d017a0ce0d","Type":"ContainerDied","Data":"e5c3d69b7ca4e11bfbf59f8c353644bc363b572f84eea87830e29ccf5972c775"} Nov 25 10:50:54 crc kubenswrapper[4813]: I1125 10:50:54.990765 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-2d2x7" Nov 25 10:50:54 crc kubenswrapper[4813]: I1125 10:50:54.992879 4813 scope.go:117] "RemoveContainer" containerID="a8658001c24e23d0f354d33250a7b2c39f4e0e69e51a660eff607088497ca501" Nov 25 10:50:54 crc kubenswrapper[4813]: E1125 10:50:54.993177 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=placement-operator-controller-manager-5db546f9d9-2d2x7_openstack-operators(9093a664-86f3-4349-bd13-0a5e4aca8036)\"" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-2d2x7" podUID="9093a664-86f3-4349-bd13-0a5e4aca8036" Nov 25 10:50:55 crc kubenswrapper[4813]: I1125 10:50:55.029005 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-fjkzd" Nov 25 10:50:55 crc 
kubenswrapper[4813]: I1125 10:50:55.029903 4813 scope.go:117] "RemoveContainer" containerID="6ab88f58405febfc4b2658925fe24837daf2715de3bde72851222fc9b6284fca" Nov 25 10:50:55 crc kubenswrapper[4813]: E1125 10:50:55.030175 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=swift-operator-controller-manager-6fdc4fcf86-fjkzd_openstack-operators(94c3d2b4-f1bb-402d-a39d-78e16bee970b)\"" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-fjkzd" podUID="94c3d2b4-f1bb-402d-a39d-78e16bee970b" Nov 25 10:50:55 crc kubenswrapper[4813]: I1125 10:50:55.060173 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-qplf9" Nov 25 10:50:55 crc kubenswrapper[4813]: I1125 10:50:55.060850 4813 scope.go:117] "RemoveContainer" containerID="08c55215eff81a95904e99154d36ad7ea72cad77ec8e4aa5f314e90700fedc4a" Nov 25 10:50:55 crc kubenswrapper[4813]: E1125 10:50:55.061086 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=telemetry-operator-controller-manager-567f98c9d-qplf9_openstack-operators(5f9254c7-c8dc-4504-bdf5-264c78e03b0c)\"" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-qplf9" podUID="5f9254c7-c8dc-4504-bdf5-264c78e03b0c" Nov 25 10:50:55 crc kubenswrapper[4813]: I1125 10:50:55.167649 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5cb74df96-cwrzw" Nov 25 10:50:55 crc kubenswrapper[4813]: I1125 10:50:55.180619 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-tc2mg" Nov 25 10:50:55 crc kubenswrapper[4813]: I1125 10:50:55.181666 4813 scope.go:117] "RemoveContainer" containerID="71f03dbe720f3dde3b5d0765c2080a87ad643d916dde889b7465a7f5184107f5" Nov 25 10:50:55 crc kubenswrapper[4813]: E1125 10:50:55.181979 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=ovn-operator-controller-manager-66cf5c67ff-tc2mg_openstack-operators(db556642-a360-4559-8cde-7c25d7a893e0)\"" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-tc2mg" podUID="db556642-a360-4559-8cde-7c25d7a893e0" Nov 25 10:50:55 crc kubenswrapper[4813]: I1125 10:50:55.226055 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-864885998-bpbjt" Nov 25 10:50:55 crc kubenswrapper[4813]: I1125 10:50:55.228151 4813 scope.go:117] "RemoveContainer" containerID="3bd95e258595cc79eae7f50ce683229c518378d20824c087c3bc89823343ab14" Nov 25 10:50:55 crc kubenswrapper[4813]: E1125 10:50:55.228493 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=watcher-operator-controller-manager-864885998-bpbjt_openstack-operators(48ea1018-a88f-4ef0-a82f-7e3b012522ec)\"" pod="openstack-operators/watcher-operator-controller-manager-864885998-bpbjt" podUID="48ea1018-a88f-4ef0-a82f-7e3b012522ec" Nov 25 10:50:55 crc kubenswrapper[4813]: I1125 10:50:55.621011 4813 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 25 10:50:55 crc kubenswrapper[4813]: I1125 10:50:55.621612 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 25 10:50:55 crc kubenswrapper[4813]: I1125 10:50:55.621620 4813 scope.go:117] "RemoveContainer" containerID="f829d6ac06cc4bb482f15385295bf7d72531134c4f45902c0eb550e1d6517fd1" Nov 25 10:50:55 crc kubenswrapper[4813]: I1125 10:50:55.622589 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Nov 25 10:50:55 crc kubenswrapper[4813]: I1125 10:50:55.623377 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Nov 25 10:50:55 crc kubenswrapper[4813]: I1125 10:50:55.957524 4813 generic.go:334] "Generic (PLEG): container finished" podID="a6eb0ffd-2e55-4d5a-9ac7-19b25ba6ec8b" containerID="5e6d99aceef67e79d2faf4c6ce97387949ec0a4024f91f0fec708b9d90c04746" exitCode=1 Nov 25 10:50:55 crc kubenswrapper[4813]: I1125 10:50:55.957999 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6b84b955f5-mmrm7" event={"ID":"a6eb0ffd-2e55-4d5a-9ac7-19b25ba6ec8b","Type":"ContainerDied","Data":"5e6d99aceef67e79d2faf4c6ce97387949ec0a4024f91f0fec708b9d90c04746"} Nov 25 10:50:55 crc kubenswrapper[4813]: I1125 10:50:55.959278 4813 scope.go:117] "RemoveContainer" containerID="f829d6ac06cc4bb482f15385295bf7d72531134c4f45902c0eb550e1d6517fd1" Nov 25 10:50:55 crc kubenswrapper[4813]: I1125 10:50:55.960115 4813 scope.go:117] "RemoveContainer" containerID="5e6d99aceef67e79d2faf4c6ce97387949ec0a4024f91f0fec708b9d90c04746" Nov 25 10:50:55 crc kubenswrapper[4813]: E1125 10:50:55.960461 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=metallb-operator-controller-manager-6b84b955f5-mmrm7_metallb-system(a6eb0ffd-2e55-4d5a-9ac7-19b25ba6ec8b)\"" pod="metallb-system/metallb-operator-controller-manager-6b84b955f5-mmrm7" podUID="a6eb0ffd-2e55-4d5a-9ac7-19b25ba6ec8b" Nov 25 10:50:55 crc kubenswrapper[4813]: I1125 10:50:55.969742 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-kzv7f" event={"ID":"9bf77eb8-82fb-4ad7-9cf8-57d017a0ce0d","Type":"ContainerStarted","Data":"22ac3b8909afa1412bf592f7aab576266f1ce3d34c6523e251b6014484212edf"} Nov 25 10:50:55 crc kubenswrapper[4813]: I1125 10:50:55.969795 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-kzv7f" event={"ID":"9bf77eb8-82fb-4ad7-9cf8-57d017a0ce0d","Type":"ContainerStarted","Data":"923cbf225a6d100d14396386920707e987f576457e3a3eac4152bb0d73e7fd28"} Nov 25 10:50:55 crc kubenswrapper[4813]: I1125 10:50:55.969960 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-kzv7f" Nov 25 10:50:55 crc kubenswrapper[4813]: I1125 10:50:55.969985 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-kzv7f" Nov 25 10:50:55 crc kubenswrapper[4813]: I1125 10:50:55.972425 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-hdpgf" event={"ID":"78498723-5c73-4aa4-8480-ef20ce8593ac","Type":"ContainerStarted","Data":"f1d21f9b405c6f0be60360ae0f540166ed247b53b4cbefa02f13e3b211905ab8"} Nov 25 10:50:55 crc kubenswrapper[4813]: 
I1125 10:50:55.973069 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-hdpgf" Nov 25 10:50:55 crc kubenswrapper[4813]: I1125 10:50:55.976235 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-6bkfh" event={"ID":"fe34d8fb-5b40-4191-8015-acb5ed8ea562","Type":"ContainerStarted","Data":"d266f540efb34a26e056028f65971d7fda1fa41d8ec89450215ed6baa2b8b4b0"} Nov 25 10:50:55 crc kubenswrapper[4813]: I1125 10:50:55.976786 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-6bkfh" Nov 25 10:50:56 crc kubenswrapper[4813]: W1125 10:50:56.109300 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9005be17_9874_4f4f_bd91_39b3c74314ec.slice/crio-e3abafc42af3aaa08ef276b74578d6a8680f5b5556a7f9a6f9cad14d9433409f WatchSource:0}: Error finding container e3abafc42af3aaa08ef276b74578d6a8680f5b5556a7f9a6f9cad14d9433409f: Status 404 returned error can't find the container with id e3abafc42af3aaa08ef276b74578d6a8680f5b5556a7f9a6f9cad14d9433409f Nov 25 10:50:56 crc kubenswrapper[4813]: W1125 10:50:56.209621 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode9030c35_b810_4f59_b1e6_5daec39fcc6d.slice/crio-a58d1c79bbf8863768151ed37d5ebe004b40bf772f1b423d522550d7bb0f415c WatchSource:0}: Error finding container a58d1c79bbf8863768151ed37d5ebe004b40bf772f1b423d522550d7bb0f415c: Status 404 returned error can't find the container with id a58d1c79bbf8863768151ed37d5ebe004b40bf772f1b423d522550d7bb0f415c Nov 25 10:50:56 crc kubenswrapper[4813]: I1125 10:50:56.464262 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-v2clw" Nov 25 10:50:56 crc kubenswrapper[4813]: I1125 10:50:56.621142 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-qjpvf" Nov 25 10:50:56 crc kubenswrapper[4813]: I1125 10:50:56.621276 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 25 10:50:56 crc kubenswrapper[4813]: I1125 10:50:56.622660 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-qjpvf" Nov 25 10:50:56 crc kubenswrapper[4813]: I1125 10:50:56.622789 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Nov 25 10:50:56 crc kubenswrapper[4813]: I1125 10:50:56.995142 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"11b88009-8577-4264-afbf-8aee9bfc90f8","Type":"ContainerStarted","Data":"8dfa5eb364c7222a24968ce942bef16b214badc06f4633efd8b678479bf3d6e0"} Nov 25 10:50:56 crc kubenswrapper[4813]: I1125 10:50:56.996747 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"e9030c35-b810-4f59-b1e6-5daec39fcc6d","Type":"ContainerStarted","Data":"a58d1c79bbf8863768151ed37d5ebe004b40bf772f1b423d522550d7bb0f415c"} Nov 25 10:50:56 crc kubenswrapper[4813]: I1125 10:50:56.997878 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9005be17-9874-4f4f-bd91-39b3c74314ec","Type":"ContainerStarted","Data":"e3abafc42af3aaa08ef276b74578d6a8680f5b5556a7f9a6f9cad14d9433409f"} Nov 25 10:50:57 crc kubenswrapper[4813]: I1125 10:50:57.212601 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-577fbd7764-z9m8h" Nov 25 10:50:57 crc kubenswrapper[4813]: W1125 10:50:57.407004 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podda545e4e_8f60_4fb5_93e8_d9e9014c3c74.slice/crio-cd6b40f660b27d527e2a9cd2ee91ae5e857b35cb080dc63e63dcdca7aac21d5e WatchSource:0}: Error finding container cd6b40f660b27d527e2a9cd2ee91ae5e857b35cb080dc63e63dcdca7aac21d5e: Status 404 returned error can't find the container with id cd6b40f660b27d527e2a9cd2ee91ae5e857b35cb080dc63e63dcdca7aac21d5e Nov 25 10:50:57 crc kubenswrapper[4813]: I1125 10:50:57.884008 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 10:50:58 crc kubenswrapper[4813]: I1125 10:50:58.010860 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qjpvf" event={"ID":"da545e4e-8f60-4fb5-93e8-d9e9014c3c74","Type":"ContainerStarted","Data":"cd6b40f660b27d527e2a9cd2ee91ae5e857b35cb080dc63e63dcdca7aac21d5e"} Nov 25 10:50:58 crc kubenswrapper[4813]: I1125 10:50:58.621273 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 25 10:50:58 crc kubenswrapper[4813]: I1125 10:50:58.622403 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 25 10:50:59 crc kubenswrapper[4813]: I1125 10:50:59.804311 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Nov 25 10:50:59 crc kubenswrapper[4813]: I1125 10:50:59.900353 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Nov 25 10:50:59 crc kubenswrapper[4813]: I1125 10:50:59.987867 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Nov 25 10:51:00 crc kubenswrapper[4813]: I1125 10:51:00.182712 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-fsmcq" Nov 25 10:51:01 crc kubenswrapper[4813]: I1125 10:51:01.604993 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 25 10:51:01 crc kubenswrapper[4813]: I1125 10:51:01.629087 4813 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Nov 25 10:51:01 crc kubenswrapper[4813]: I1125 10:51:01.850602 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-wqm5s" Nov 25 10:51:01 crc kubenswrapper[4813]: I1125 10:51:01.918525 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-7zdfj" Nov 25 10:51:02 crc kubenswrapper[4813]: I1125 10:51:02.509515 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Nov 25 10:51:02 crc kubenswrapper[4813]: W1125 10:51:02.962639 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0444b7b3_af36_4fca_80c6_8348adc42a58.slice/crio-718ecd3b4df4457bf2d80dd958ad4b55c70f857030e2819145644a60a0ddc7e8 WatchSource:0}: Error finding container 718ecd3b4df4457bf2d80dd958ad4b55c70f857030e2819145644a60a0ddc7e8: Status 404 returned error can't find the container with id 718ecd3b4df4457bf2d80dd958ad4b55c70f857030e2819145644a60a0ddc7e8 Nov 25 10:51:03 crc kubenswrapper[4813]: I1125 10:51:03.043545 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Nov 25 10:51:03 crc kubenswrapper[4813]: I1125 10:51:03.113315 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"11b88009-8577-4264-afbf-8aee9bfc90f8","Type":"ContainerStarted","Data":"f3852685d531520781eea80d200308d2fa7a74843b8e0d878b58bff9a53a9d32"} Nov 25 10:51:03 crc kubenswrapper[4813]: I1125 10:51:03.113454 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Nov 25 10:51:03 crc kubenswrapper[4813]: I1125 10:51:03.114946 4813 generic.go:334] "Generic (PLEG): container finished" podID="e9030c35-b810-4f59-b1e6-5daec39fcc6d" containerID="22a818453f90b0ec2e1d917ae353790b07f764575f83a7cd0717d7ac1e9122ed" exitCode=1 Nov 25 10:51:03 crc kubenswrapper[4813]: I1125 10:51:03.115170 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"e9030c35-b810-4f59-b1e6-5daec39fcc6d","Type":"ContainerDied","Data":"22a818453f90b0ec2e1d917ae353790b07f764575f83a7cd0717d7ac1e9122ed"} Nov 25 10:51:03 crc kubenswrapper[4813]: I1125 10:51:03.115465 4813 
scope.go:117] "RemoveContainer" containerID="22a818453f90b0ec2e1d917ae353790b07f764575f83a7cd0717d7ac1e9122ed" Nov 25 10:51:03 crc kubenswrapper[4813]: I1125 10:51:03.117453 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"0444b7b3-af36-4fca-80c6-8348adc42a58","Type":"ContainerStarted","Data":"d485ef91cc0c7661e8de48a1695d2002b56d90f65fe6b821940417d9704ee765"} Nov 25 10:51:03 crc kubenswrapper[4813]: I1125 10:51:03.117515 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"0444b7b3-af36-4fca-80c6-8348adc42a58","Type":"ContainerStarted","Data":"718ecd3b4df4457bf2d80dd958ad4b55c70f857030e2819145644a60a0ddc7e8"} Nov 25 10:51:03 crc kubenswrapper[4813]: I1125 10:51:03.118659 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9005be17-9874-4f4f-bd91-39b3c74314ec","Type":"ContainerStarted","Data":"c665bcb3ca9b8e3b9b3b67396c7636f5856751171315ecdc828b020fe41d11f7"} Nov 25 10:51:03 crc kubenswrapper[4813]: I1125 10:51:03.121102 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qjpvf" event={"ID":"da545e4e-8f60-4fb5-93e8-d9e9014c3c74","Type":"ContainerStarted","Data":"0a7900d6dda016d5953c898f1247dd97436b71c15d9fca03377281bcd7512401"} Nov 25 10:51:03 crc kubenswrapper[4813]: I1125 10:51:03.121772 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-qjpvf" Nov 25 10:51:03 crc kubenswrapper[4813]: I1125 10:51:03.170553 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-666b6646f7-hdpgf" Nov 25 10:51:03 crc kubenswrapper[4813]: I1125 10:51:03.180162 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Nov 25 10:51:03 crc kubenswrapper[4813]: I1125 10:51:03.277583 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Nov 25 10:51:03 crc kubenswrapper[4813]: I1125 10:51:03.469066 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Nov 25 10:51:03 crc kubenswrapper[4813]: I1125 10:51:03.488933 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d769cc4f-6bkfh" Nov 25 10:51:03 crc kubenswrapper[4813]: I1125 10:51:03.607617 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Nov 25 10:51:03 crc kubenswrapper[4813]: I1125 10:51:03.654593 4813 scope.go:117] "RemoveContainer" containerID="2ff22368afff2caaf623f9590ad4d44ff7f5d8f168fc61d3db437d96e48f8683" Nov 25 10:51:03 crc kubenswrapper[4813]: I1125 10:51:03.815413 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-6b84b955f5-mmrm7" Nov 25 10:51:03 crc kubenswrapper[4813]: I1125 10:51:03.816631 4813 scope.go:117] "RemoveContainer" containerID="5e6d99aceef67e79d2faf4c6ce97387949ec0a4024f91f0fec708b9d90c04746" Nov 25 10:51:03 crc kubenswrapper[4813]: E1125 10:51:03.816911 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=metallb-operator-controller-manager-6b84b955f5-mmrm7_metallb-system(a6eb0ffd-2e55-4d5a-9ac7-19b25ba6ec8b)\"" 
pod="metallb-system/metallb-operator-controller-manager-6b84b955f5-mmrm7" podUID="a6eb0ffd-2e55-4d5a-9ac7-19b25ba6ec8b" Nov 25 10:51:03 crc kubenswrapper[4813]: I1125 10:51:03.922143 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Nov 25 10:51:03 crc kubenswrapper[4813]: I1125 10:51:03.929487 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Nov 25 10:51:03 crc kubenswrapper[4813]: I1125 10:51:03.983756 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Nov 25 10:51:04 crc kubenswrapper[4813]: I1125 10:51:04.024929 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Nov 25 10:51:04 crc kubenswrapper[4813]: I1125 10:51:04.127645 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Nov 25 10:51:04 crc kubenswrapper[4813]: I1125 10:51:04.129972 4813 generic.go:334] "Generic (PLEG): container finished" podID="e9030c35-b810-4f59-b1e6-5daec39fcc6d" containerID="95750d6a122b9d2009ab5f55a56f9dd060d644f88e52fa5d51be75d058873105" exitCode=1 Nov 25 10:51:04 crc kubenswrapper[4813]: I1125 10:51:04.130030 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"e9030c35-b810-4f59-b1e6-5daec39fcc6d","Type":"ContainerDied","Data":"95750d6a122b9d2009ab5f55a56f9dd060d644f88e52fa5d51be75d058873105"} Nov 25 10:51:04 crc kubenswrapper[4813]: I1125 10:51:04.130060 4813 scope.go:117] "RemoveContainer" containerID="22a818453f90b0ec2e1d917ae353790b07f764575f83a7cd0717d7ac1e9122ed" Nov 25 10:51:04 crc kubenswrapper[4813]: I1125 10:51:04.130906 4813 scope.go:117] "RemoveContainer" containerID="95750d6a122b9d2009ab5f55a56f9dd060d644f88e52fa5d51be75d058873105" Nov 25 10:51:04 crc kubenswrapper[4813]: E1125 10:51:04.131150 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-state-metrics pod=kube-state-metrics-0_openstack(e9030c35-b810-4f59-b1e6-5daec39fcc6d)\"" pod="openstack/kube-state-metrics-0" podUID="e9030c35-b810-4f59-b1e6-5daec39fcc6d" Nov 25 10:51:04 crc kubenswrapper[4813]: I1125 10:51:04.135728 4813 generic.go:334] "Generic (PLEG): container finished" podID="09bd1800-0aaa-4908-ac58-e0890a2a309f" containerID="d1e62b445459b34984999bd018b9ecc5cad36cfc97c7cb8b1e67620067d14695" exitCode=1 Nov 25 10:51:04 crc kubenswrapper[4813]: I1125 10:51:04.136024 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5ffc8f797b-hbwwd" event={"ID":"09bd1800-0aaa-4908-ac58-e0890a2a309f","Type":"ContainerDied","Data":"d1e62b445459b34984999bd018b9ecc5cad36cfc97c7cb8b1e67620067d14695"} Nov 25 10:51:04 crc kubenswrapper[4813]: I1125 10:51:04.137245 4813 scope.go:117] "RemoveContainer" containerID="d1e62b445459b34984999bd018b9ecc5cad36cfc97c7cb8b1e67620067d14695" Nov 25 10:51:04 crc kubenswrapper[4813]: E1125 10:51:04.137482 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=openstack-operator-controller-manager-5ffc8f797b-hbwwd_openstack-operators(09bd1800-0aaa-4908-ac58-e0890a2a309f)\"" 
pod="openstack-operators/openstack-operator-controller-manager-5ffc8f797b-hbwwd" podUID="09bd1800-0aaa-4908-ac58-e0890a2a309f" Nov 25 10:51:04 crc kubenswrapper[4813]: I1125 10:51:04.191476 4813 scope.go:117] "RemoveContainer" containerID="2ff22368afff2caaf623f9590ad4d44ff7f5d8f168fc61d3db437d96e48f8683" Nov 25 10:51:04 crc kubenswrapper[4813]: I1125 10:51:04.262192 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 25 10:51:04 crc kubenswrapper[4813]: I1125 10:51:04.296390 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Nov 25 10:51:04 crc kubenswrapper[4813]: I1125 10:51:04.324073 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Nov 25 10:51:04 crc kubenswrapper[4813]: I1125 10:51:04.343803 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-4wff2" Nov 25 10:51:04 crc kubenswrapper[4813]: I1125 10:51:04.344542 4813 scope.go:117] "RemoveContainer" containerID="cda74bc0da0a48be166875072753f56094069000bba93d31425a2aa47be4245b" Nov 25 10:51:04 crc kubenswrapper[4813]: I1125 10:51:04.361146 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-dvfd9" Nov 25 10:51:04 crc kubenswrapper[4813]: I1125 10:51:04.361787 4813 scope.go:117] "RemoveContainer" containerID="1140fc40e47e8c0d7c57a6561b72f2fbaac44f04f7b4ed24f1f25b537b03c0ce" Nov 25 10:51:04 crc kubenswrapper[4813]: I1125 10:51:04.381065 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-hjqzd" Nov 25 10:51:04 crc kubenswrapper[4813]: I1125 10:51:04.381485 4813 scope.go:117] "RemoveContainer" containerID="cc703f9838f9f56a7bd18de6e62a1a2bd7339f8373a7b51090ffaf5f2395482c" Nov 25 10:51:04 crc kubenswrapper[4813]: I1125 10:51:04.414816 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/glance-operator-controller-manager-547cf68667-6v6dd" Nov 25 10:51:04 crc kubenswrapper[4813]: I1125 10:51:04.415491 4813 scope.go:117] "RemoveContainer" containerID="edffe6489b5c73fee9cf8cbb8977f408153e00bc3bbc9a6395f0b2d4426f9935" Nov 25 10:51:04 crc kubenswrapper[4813]: I1125 10:51:04.448289 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Nov 25 10:51:04 crc kubenswrapper[4813]: I1125 10:51:04.457717 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/heat-operator-controller-manager-774b86978c-f6dvp" Nov 25 10:51:04 crc kubenswrapper[4813]: I1125 10:51:04.458376 4813 scope.go:117] "RemoveContainer" containerID="264694fb3363c363000af0fabea19789703c2886c007b5b5b7dc0911decc420a" Nov 25 10:51:04 crc kubenswrapper[4813]: I1125 10:51:04.471376 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-8spkk" Nov 25 10:51:04 crc kubenswrapper[4813]: I1125 10:51:04.472046 4813 scope.go:117] "RemoveContainer" containerID="aa8bdca827af36c5e7fb4dc86eaf3de61567a99aac972137a16c2ce4565816ff" Nov 25 10:51:04 crc kubenswrapper[4813]: I1125 10:51:04.474071 4813 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Nov 25 10:51:04 crc kubenswrapper[4813]: I1125 10:51:04.496091 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-nrhzm" Nov 25 10:51:04 crc kubenswrapper[4813]: I1125 10:51:04.526862 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Nov 25 10:51:04 crc kubenswrapper[4813]: I1125 10:51:04.573021 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-blrjt" Nov 25 10:51:04 crc kubenswrapper[4813]: I1125 10:51:04.573771 4813 scope.go:117] "RemoveContainer" containerID="40fdb2fd01b91262d81b9fa748bb8d4c5cc505a7dfc16986f11d66d70563bf46" Nov 25 10:51:04 crc kubenswrapper[4813]: I1125 10:51:04.590829 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/infra-operator-controller-manager-858778c9dc-fs9sm" Nov 25 10:51:04 crc kubenswrapper[4813]: I1125 10:51:04.592013 4813 scope.go:117] "RemoveContainer" containerID="2eaadf3a93459490dde061b673492d47d51a2c68fb5222284d46e52587ffb474" Nov 25 10:51:04 crc kubenswrapper[4813]: I1125 10:51:04.621283 4813 scope.go:117] "RemoveContainer" containerID="da5fb8b8e603c05b9d5c4ba2ad623ce2ce8911c409e387179676d32d29cf262a" Nov 25 10:51:04 crc kubenswrapper[4813]: I1125 10:51:04.681561 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Nov 25 10:51:04 crc kubenswrapper[4813]: I1125 10:51:04.683787 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Nov 25 10:51:04 crc kubenswrapper[4813]: I1125 10:51:04.691175 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Nov 25 10:51:04 crc kubenswrapper[4813]: I1125 10:51:04.740191 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 25 10:51:04 crc kubenswrapper[4813]: I1125 10:51:04.747944 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-jcjzx" Nov 25 10:51:04 crc kubenswrapper[4813]: I1125 10:51:04.748793 4813 scope.go:117] "RemoveContainer" containerID="ffa68b56ce90c24616b3f49d4d4f7a9b8dff8addac564d9c4798adc5a764af9c" Nov 25 10:51:04 crc kubenswrapper[4813]: I1125 10:51:04.764063 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-76j46" Nov 25 10:51:04 crc kubenswrapper[4813]: I1125 10:51:04.764966 4813 scope.go:117] "RemoveContainer" containerID="8930151d465ee02dbf1e9291a7bc76a3b99ca6cc4c297b46b49817360947303d" Nov 25 10:51:04 crc kubenswrapper[4813]: I1125 10:51:04.809597 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-5ldjd" Nov 25 10:51:04 crc kubenswrapper[4813]: I1125 10:51:04.810581 4813 scope.go:117] "RemoveContainer" containerID="d09637388df878cdf2bfa3d5e5d83aa661d35fdf74a36a7300cd4e0118bf3141" Nov 25 10:51:04 crc kubenswrapper[4813]: I1125 10:51:04.826572 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-c6kw6" Nov 25 10:51:04 crc kubenswrapper[4813]: I1125 10:51:04.827186 4813 scope.go:117] "RemoveContainer" containerID="41b0500f8fda41e041f07f12a281727c6c4654deebb714ace797dfcdc6453b60" Nov 25 10:51:04 crc kubenswrapper[4813]: I1125 10:51:04.834956 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-6j272" Nov 25 10:51:04 crc kubenswrapper[4813]: I1125 10:51:04.835425 4813 scope.go:117] "RemoveContainer" containerID="b336d93ef0071b8168144d46bcb0c44981c4be7dc6628753c0c834d82d1cb9a9" Nov 25 10:51:04 crc kubenswrapper[4813]: I1125 10:51:04.877751 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-gjs27" Nov 25 10:51:04 crc kubenswrapper[4813]: I1125 10:51:04.878535 4813 scope.go:117] "RemoveContainer" containerID="2f3df6ee96c1826ef338deaed964bd5c5fedf13358353937912ea52c92fa80f8" Nov 25 10:51:04 crc kubenswrapper[4813]: I1125 10:51:04.991815 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-2d2x7" Nov 25 10:51:04 crc kubenswrapper[4813]: I1125 10:51:04.992495 4813 scope.go:117] "RemoveContainer" containerID="a8658001c24e23d0f354d33250a7b2c39f4e0e69e51a660eff607088497ca501" Nov 25 10:51:05 crc kubenswrapper[4813]: I1125 10:51:05.029080 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-fjkzd" Nov 25 10:51:05 crc kubenswrapper[4813]: I1125 10:51:05.029963 4813 scope.go:117] "RemoveContainer" containerID="6ab88f58405febfc4b2658925fe24837daf2715de3bde72851222fc9b6284fca" Nov 25 10:51:05 crc kubenswrapper[4813]: I1125 10:51:05.057428 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Nov 25 10:51:05 crc kubenswrapper[4813]: I1125 10:51:05.060040 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-qplf9" Nov 25 10:51:05 crc kubenswrapper[4813]: I1125 10:51:05.060485 4813 scope.go:117] "RemoveContainer" containerID="08c55215eff81a95904e99154d36ad7ea72cad77ec8e4aa5f314e90700fedc4a" Nov 25 10:51:05 crc kubenswrapper[4813]: I1125 10:51:05.120993 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Nov 25 10:51:05 crc kubenswrapper[4813]: I1125 10:51:05.181731 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-tc2mg" Nov 25 10:51:05 crc kubenswrapper[4813]: I1125 10:51:05.182351 4813 scope.go:117] "RemoveContainer" containerID="71f03dbe720f3dde3b5d0765c2080a87ad643d916dde889b7465a7f5184107f5" Nov 25 10:51:05 crc kubenswrapper[4813]: I1125 10:51:05.197108 4813 generic.go:334] "Generic (PLEG): container finished" podID="aa2934d9-d547-49d0-9d06-232120b44fa1" containerID="6adae85f90a1da16b445e1a30fe09db98185ce36b6a45741031a9f7f69e1e630" exitCode=1 Nov 25 10:51:05 crc kubenswrapper[4813]: I1125 10:51:05.197155 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-hjqzd" 
event={"ID":"aa2934d9-d547-49d0-9d06-232120b44fa1","Type":"ContainerDied","Data":"6adae85f90a1da16b445e1a30fe09db98185ce36b6a45741031a9f7f69e1e630"} Nov 25 10:51:05 crc kubenswrapper[4813]: I1125 10:51:05.197229 4813 scope.go:117] "RemoveContainer" containerID="cc703f9838f9f56a7bd18de6e62a1a2bd7339f8373a7b51090ffaf5f2395482c" Nov 25 10:51:05 crc kubenswrapper[4813]: I1125 10:51:05.198107 4813 scope.go:117] "RemoveContainer" containerID="6adae85f90a1da16b445e1a30fe09db98185ce36b6a45741031a9f7f69e1e630" Nov 25 10:51:05 crc kubenswrapper[4813]: E1125 10:51:05.198344 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=designate-operator-controller-manager-7d695c9b56-hjqzd_openstack-operators(aa2934d9-d547-49d0-9d06-232120b44fa1)\"" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-hjqzd" podUID="aa2934d9-d547-49d0-9d06-232120b44fa1" Nov 25 10:51:05 crc kubenswrapper[4813]: I1125 10:51:05.206205 4813 scope.go:117] "RemoveContainer" containerID="95750d6a122b9d2009ab5f55a56f9dd060d644f88e52fa5d51be75d058873105" Nov 25 10:51:05 crc kubenswrapper[4813]: E1125 10:51:05.206653 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-state-metrics pod=kube-state-metrics-0_openstack(e9030c35-b810-4f59-b1e6-5daec39fcc6d)\"" pod="openstack/kube-state-metrics-0" podUID="e9030c35-b810-4f59-b1e6-5daec39fcc6d" Nov 25 10:51:05 crc kubenswrapper[4813]: I1125 10:51:05.232171 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/watcher-operator-controller-manager-864885998-bpbjt" Nov 25 10:51:05 crc kubenswrapper[4813]: I1125 10:51:05.232881 4813 scope.go:117] "RemoveContainer" containerID="3bd95e258595cc79eae7f50ce683229c518378d20824c087c3bc89823343ab14" Nov 25 10:51:05 crc kubenswrapper[4813]: I1125 10:51:05.251354 4813 generic.go:334] "Generic (PLEG): container finished" podID="a650bdd3-2541-4b76-b5db-64273262bc06" containerID="6cfbef3e5911a335e778a25cc22825312e21d3376c549b161d9302f36e73d1b9" exitCode=1 Nov 25 10:51:05 crc kubenswrapper[4813]: I1125 10:51:05.251485 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-dvfd9" event={"ID":"a650bdd3-2541-4b76-b5db-64273262bc06","Type":"ContainerDied","Data":"6cfbef3e5911a335e778a25cc22825312e21d3376c549b161d9302f36e73d1b9"} Nov 25 10:51:05 crc kubenswrapper[4813]: I1125 10:51:05.251581 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Nov 25 10:51:05 crc kubenswrapper[4813]: I1125 10:51:05.251730 4813 scope.go:117] "RemoveContainer" containerID="6cfbef3e5911a335e778a25cc22825312e21d3376c549b161d9302f36e73d1b9" Nov 25 10:51:05 crc kubenswrapper[4813]: E1125 10:51:05.251954 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=cinder-operator-controller-manager-79856dc55c-dvfd9_openstack-operators(a650bdd3-2541-4b76-b5db-64273262bc06)\"" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-dvfd9" podUID="a650bdd3-2541-4b76-b5db-64273262bc06" Nov 25 10:51:05 crc kubenswrapper[4813]: I1125 10:51:05.271251 4813 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-nmstate"/"default-dockercfg-lkjmd" Nov 25 10:51:05 crc kubenswrapper[4813]: I1125 10:51:05.278219 4813 generic.go:334] "Generic (PLEG): container finished" podID="03c63a63-9a46-4bda-941b-8c5ba81a13fe" containerID="16460e4f9c43088098ac12f9e10def54db37c1068c6a044a870425a3f19e77b4" exitCode=1 Nov 25 10:51:05 crc kubenswrapper[4813]: I1125 10:51:05.278319 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-4wff2" event={"ID":"03c63a63-9a46-4bda-941b-8c5ba81a13fe","Type":"ContainerDied","Data":"16460e4f9c43088098ac12f9e10def54db37c1068c6a044a870425a3f19e77b4"} Nov 25 10:51:05 crc kubenswrapper[4813]: I1125 10:51:05.279088 4813 scope.go:117] "RemoveContainer" containerID="16460e4f9c43088098ac12f9e10def54db37c1068c6a044a870425a3f19e77b4" Nov 25 10:51:05 crc kubenswrapper[4813]: E1125 10:51:05.279404 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=barbican-operator-controller-manager-86dc4d89c8-4wff2_openstack-operators(03c63a63-9a46-4bda-941b-8c5ba81a13fe)\"" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-4wff2" podUID="03c63a63-9a46-4bda-941b-8c5ba81a13fe" Nov 25 10:51:05 crc kubenswrapper[4813]: I1125 10:51:05.297981 4813 generic.go:334] "Generic (PLEG): container finished" podID="af18e07e-95b3-476f-9604-824c36ae74a5" containerID="86e726cf9b8333f0660a30b9e6f09b1e7a7dd75a7fe3436c10eff9990aebb19c" exitCode=1 Nov 25 10:51:05 crc kubenswrapper[4813]: I1125 10:51:05.298135 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-8spkk" event={"ID":"af18e07e-95b3-476f-9604-824c36ae74a5","Type":"ContainerDied","Data":"86e726cf9b8333f0660a30b9e6f09b1e7a7dd75a7fe3436c10eff9990aebb19c"} Nov 25 10:51:05 crc kubenswrapper[4813]: I1125 10:51:05.338603 4813 scope.go:117] "RemoveContainer" containerID="86e726cf9b8333f0660a30b9e6f09b1e7a7dd75a7fe3436c10eff9990aebb19c" Nov 25 10:51:05 crc kubenswrapper[4813]: E1125 10:51:05.339834 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=horizon-operator-controller-manager-68c9694994-8spkk_openstack-operators(af18e07e-95b3-476f-9604-824c36ae74a5)\"" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-8spkk" podUID="af18e07e-95b3-476f-9604-824c36ae74a5" Nov 25 10:51:05 crc kubenswrapper[4813]: I1125 10:51:05.368077 4813 generic.go:334] "Generic (PLEG): container finished" podID="d4a62556-e6e8-42dc-b7e4-180c40611393" containerID="c79525bf17e1747505b559eb8e125a6012f2aa8ff9aaa37562d972c623d802a0" exitCode=1 Nov 25 10:51:05 crc kubenswrapper[4813]: I1125 10:51:05.368189 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-blrjt" event={"ID":"d4a62556-e6e8-42dc-b7e4-180c40611393","Type":"ContainerDied","Data":"c79525bf17e1747505b559eb8e125a6012f2aa8ff9aaa37562d972c623d802a0"} Nov 25 10:51:05 crc kubenswrapper[4813]: I1125 10:51:05.369050 4813 scope.go:117] "RemoveContainer" containerID="c79525bf17e1747505b559eb8e125a6012f2aa8ff9aaa37562d972c623d802a0" Nov 25 10:51:05 crc kubenswrapper[4813]: E1125 10:51:05.369250 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" 
with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=ironic-operator-controller-manager-5bfcdc958c-blrjt_openstack-operators(d4a62556-e6e8-42dc-b7e4-180c40611393)\"" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-blrjt" podUID="d4a62556-e6e8-42dc-b7e4-180c40611393" Nov 25 10:51:05 crc kubenswrapper[4813]: I1125 10:51:05.374135 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Nov 25 10:51:05 crc kubenswrapper[4813]: I1125 10:51:05.397583 4813 generic.go:334] "Generic (PLEG): container finished" podID="71c5bfc5-a289-4942-bc55-819f06787eb6" containerID="aa2d95b74c8b460ce076d792421db9415d752e61eb487f0fbfdbe47d00194d5b" exitCode=1 Nov 25 10:51:05 crc kubenswrapper[4813]: I1125 10:51:05.397667 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-547cf68667-6v6dd" event={"ID":"71c5bfc5-a289-4942-bc55-819f06787eb6","Type":"ContainerDied","Data":"aa2d95b74c8b460ce076d792421db9415d752e61eb487f0fbfdbe47d00194d5b"} Nov 25 10:51:05 crc kubenswrapper[4813]: I1125 10:51:05.398297 4813 scope.go:117] "RemoveContainer" containerID="aa2d95b74c8b460ce076d792421db9415d752e61eb487f0fbfdbe47d00194d5b" Nov 25 10:51:05 crc kubenswrapper[4813]: E1125 10:51:05.398498 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=glance-operator-controller-manager-547cf68667-6v6dd_openstack-operators(71c5bfc5-a289-4942-bc55-819f06787eb6)\"" pod="openstack-operators/glance-operator-controller-manager-547cf68667-6v6dd" podUID="71c5bfc5-a289-4942-bc55-819f06787eb6" Nov 25 10:51:05 crc kubenswrapper[4813]: I1125 10:51:05.438235 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Nov 25 10:51:05 crc kubenswrapper[4813]: I1125 10:51:05.493851 4813 generic.go:334] "Generic (PLEG): container finished" podID="eaf6f1c0-6585-4eba-8baf-942ed2503735" containerID="214e46a8b9a71b8264e51a0cf7e2d11786fdb4b5d0f1d240813790d9bee31895" exitCode=1 Nov 25 10:51:05 crc kubenswrapper[4813]: I1125 10:51:05.494023 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-774b86978c-f6dvp" event={"ID":"eaf6f1c0-6585-4eba-8baf-942ed2503735","Type":"ContainerDied","Data":"214e46a8b9a71b8264e51a0cf7e2d11786fdb4b5d0f1d240813790d9bee31895"} Nov 25 10:51:05 crc kubenswrapper[4813]: I1125 10:51:05.494726 4813 scope.go:117] "RemoveContainer" containerID="214e46a8b9a71b8264e51a0cf7e2d11786fdb4b5d0f1d240813790d9bee31895" Nov 25 10:51:05 crc kubenswrapper[4813]: E1125 10:51:05.494988 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=heat-operator-controller-manager-774b86978c-f6dvp_openstack-operators(eaf6f1c0-6585-4eba-8baf-942ed2503735)\"" pod="openstack-operators/heat-operator-controller-manager-774b86978c-f6dvp" podUID="eaf6f1c0-6585-4eba-8baf-942ed2503735" Nov 25 10:51:05 crc kubenswrapper[4813]: I1125 10:51:05.505279 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Nov 25 10:51:05 crc kubenswrapper[4813]: I1125 10:51:05.532024 4813 generic.go:334] "Generic (PLEG): container finished" 
podID="06c81a1e-0461-4457-85ea-1a4060423eda" containerID="2151bd31d0069b61def43848e29c57b6d08b542f9888b266dabceb722a50f8fa" exitCode=1 Nov 25 10:51:05 crc kubenswrapper[4813]: I1125 10:51:05.532120 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-858778c9dc-fs9sm" event={"ID":"06c81a1e-0461-4457-85ea-1a4060423eda","Type":"ContainerDied","Data":"2151bd31d0069b61def43848e29c57b6d08b542f9888b266dabceb722a50f8fa"} Nov 25 10:51:05 crc kubenswrapper[4813]: I1125 10:51:05.537959 4813 scope.go:117] "RemoveContainer" containerID="2151bd31d0069b61def43848e29c57b6d08b542f9888b266dabceb722a50f8fa" Nov 25 10:51:05 crc kubenswrapper[4813]: E1125 10:51:05.538325 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=infra-operator-controller-manager-858778c9dc-fs9sm_openstack-operators(06c81a1e-0461-4457-85ea-1a4060423eda)\"" pod="openstack-operators/infra-operator-controller-manager-858778c9dc-fs9sm" podUID="06c81a1e-0461-4457-85ea-1a4060423eda" Nov 25 10:51:05 crc kubenswrapper[4813]: I1125 10:51:05.577033 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qd4tx" event={"ID":"2bf03402-32ec-423d-a6af-657bc0cfeb15","Type":"ContainerStarted","Data":"e28573f59b3128bbce8c504da372cc321d5d011981900766f25a8cfb8347def3"} Nov 25 10:51:05 crc kubenswrapper[4813]: I1125 10:51:05.577586 4813 scope.go:117] "RemoveContainer" containerID="e28573f59b3128bbce8c504da372cc321d5d011981900766f25a8cfb8347def3" Nov 25 10:51:05 crc kubenswrapper[4813]: E1125 10:51:05.577827 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=operator pod=rabbitmq-cluster-operator-manager-668c99d594-qd4tx_openstack-operators(2bf03402-32ec-423d-a6af-657bc0cfeb15)\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qd4tx" podUID="2bf03402-32ec-423d-a6af-657bc0cfeb15" Nov 25 10:51:05 crc kubenswrapper[4813]: I1125 10:51:05.580110 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Nov 25 10:51:05 crc kubenswrapper[4813]: I1125 10:51:05.580497 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Nov 25 10:51:05 crc kubenswrapper[4813]: I1125 10:51:05.598963 4813 scope.go:117] "RemoveContainer" containerID="1140fc40e47e8c0d7c57a6561b72f2fbaac44f04f7b4ed24f1f25b537b03c0ce" Nov 25 10:51:05 crc kubenswrapper[4813]: I1125 10:51:05.641194 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Nov 25 10:51:05 crc kubenswrapper[4813]: I1125 10:51:05.724022 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Nov 25 10:51:05 crc kubenswrapper[4813]: I1125 10:51:05.728172 4813 scope.go:117] "RemoveContainer" containerID="cda74bc0da0a48be166875072753f56094069000bba93d31425a2aa47be4245b" Nov 25 10:51:05 crc kubenswrapper[4813]: I1125 10:51:05.790712 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Nov 25 10:51:05 crc kubenswrapper[4813]: I1125 10:51:05.799335 4813 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Nov 25 10:51:05 crc kubenswrapper[4813]: I1125 10:51:05.849880 4813 scope.go:117] "RemoveContainer" containerID="aa8bdca827af36c5e7fb4dc86eaf3de61567a99aac972137a16c2ce4565816ff" Nov 25 10:51:05 crc kubenswrapper[4813]: I1125 10:51:05.944871 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Nov 25 10:51:05 crc kubenswrapper[4813]: I1125 10:51:05.957088 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-v6s5g" Nov 25 10:51:05 crc kubenswrapper[4813]: I1125 10:51:05.993200 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Nov 25 10:51:06 crc kubenswrapper[4813]: I1125 10:51:06.020843 4813 scope.go:117] "RemoveContainer" containerID="40fdb2fd01b91262d81b9fa748bb8d4c5cc505a7dfc16986f11d66d70563bf46" Nov 25 10:51:06 crc kubenswrapper[4813]: I1125 10:51:06.117046 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-jqtfl" Nov 25 10:51:06 crc kubenswrapper[4813]: I1125 10:51:06.175369 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Nov 25 10:51:06 crc kubenswrapper[4813]: I1125 10:51:06.245232 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Nov 25 10:51:06 crc kubenswrapper[4813]: I1125 10:51:06.309376 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Nov 25 10:51:06 crc kubenswrapper[4813]: I1125 10:51:06.352234 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Nov 25 10:51:06 crc kubenswrapper[4813]: I1125 10:51:06.359221 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Nov 25 10:51:06 crc kubenswrapper[4813]: I1125 10:51:06.416112 4813 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Nov 25 10:51:06 crc kubenswrapper[4813]: I1125 10:51:06.518496 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Nov 25 10:51:06 crc kubenswrapper[4813]: I1125 10:51:06.594737 4813 generic.go:334] "Generic (PLEG): container finished" podID="7921584b-8ce0-45b8-8a56-ab0fdde43582" containerID="e4042b093b8fb2490684ba66d53230d906e4682f9e60b297770ef5c653c68a70" exitCode=1 Nov 25 10:51:06 crc kubenswrapper[4813]: I1125 10:51:06.594813 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-76j46" event={"ID":"7921584b-8ce0-45b8-8a56-ab0fdde43582","Type":"ContainerDied","Data":"e4042b093b8fb2490684ba66d53230d906e4682f9e60b297770ef5c653c68a70"} Nov 25 10:51:06 crc kubenswrapper[4813]: I1125 10:51:06.595531 4813 scope.go:117] "RemoveContainer" containerID="e4042b093b8fb2490684ba66d53230d906e4682f9e60b297770ef5c653c68a70" Nov 25 10:51:06 crc kubenswrapper[4813]: E1125 10:51:06.595863 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed 
container=manager pod=keystone-operator-controller-manager-748dc6576f-76j46_openstack-operators(7921584b-8ce0-45b8-8a56-ab0fdde43582)\"" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-76j46" podUID="7921584b-8ce0-45b8-8a56-ab0fdde43582" Nov 25 10:51:06 crc kubenswrapper[4813]: I1125 10:51:06.602275 4813 generic.go:334] "Generic (PLEG): container finished" podID="9093a664-86f3-4349-bd13-0a5e4aca8036" containerID="852ade3a02bdc4966cc001cd60b4f66a047664199c14930683be6960cadaac48" exitCode=1 Nov 25 10:51:06 crc kubenswrapper[4813]: I1125 10:51:06.602392 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-2d2x7" event={"ID":"9093a664-86f3-4349-bd13-0a5e4aca8036","Type":"ContainerDied","Data":"852ade3a02bdc4966cc001cd60b4f66a047664199c14930683be6960cadaac48"} Nov 25 10:51:06 crc kubenswrapper[4813]: I1125 10:51:06.603401 4813 scope.go:117] "RemoveContainer" containerID="852ade3a02bdc4966cc001cd60b4f66a047664199c14930683be6960cadaac48" Nov 25 10:51:06 crc kubenswrapper[4813]: E1125 10:51:06.603730 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=placement-operator-controller-manager-5db546f9d9-2d2x7_openstack-operators(9093a664-86f3-4349-bd13-0a5e4aca8036)\"" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-2d2x7" podUID="9093a664-86f3-4349-bd13-0a5e4aca8036" Nov 25 10:51:06 crc kubenswrapper[4813]: I1125 10:51:06.607629 4813 generic.go:334] "Generic (PLEG): container finished" podID="a31ffbb8-0255-45d6-9125-6cccc7b444ba" containerID="2a277922b1d2931bedbc476b84bfcd968bab53c7c778a49f36382c68a2a67ab7" exitCode=1 Nov 25 10:51:06 crc kubenswrapper[4813]: I1125 10:51:06.607728 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-gjs27" event={"ID":"a31ffbb8-0255-45d6-9125-6cccc7b444ba","Type":"ContainerDied","Data":"2a277922b1d2931bedbc476b84bfcd968bab53c7c778a49f36382c68a2a67ab7"} Nov 25 10:51:06 crc kubenswrapper[4813]: I1125 10:51:06.608343 4813 scope.go:117] "RemoveContainer" containerID="2a277922b1d2931bedbc476b84bfcd968bab53c7c778a49f36382c68a2a67ab7" Nov 25 10:51:06 crc kubenswrapper[4813]: E1125 10:51:06.608611 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=octavia-operator-controller-manager-fd75fd47d-gjs27_openstack-operators(a31ffbb8-0255-45d6-9125-6cccc7b444ba)\"" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-gjs27" podUID="a31ffbb8-0255-45d6-9125-6cccc7b444ba" Nov 25 10:51:06 crc kubenswrapper[4813]: I1125 10:51:06.612606 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-qplf9" event={"ID":"5f9254c7-c8dc-4504-bdf5-264c78e03b0c","Type":"ContainerStarted","Data":"8439f0e4f753871ca0b6d1cd7f0234c50f6500f26d7675f32dbb5d90cee04305"} Nov 25 10:51:06 crc kubenswrapper[4813]: I1125 10:51:06.615810 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-tc2mg" event={"ID":"db556642-a360-4559-8cde-7c25d7a893e0","Type":"ContainerStarted","Data":"66ec40f15a48177338b909733dfca944bf8b176edec10b3c22c1f9a4cccae5b5"} Nov 25 10:51:06 crc kubenswrapper[4813]: I1125 
10:51:06.617697 4813 generic.go:334] "Generic (PLEG): container finished" podID="efca9205-8a59-45ce-8c50-36b0d0389f12" containerID="e433512445f8930eb23fc7cbeaee87d955772ae05eaa6befc87f3a1cc1f105cf" exitCode=1 Nov 25 10:51:06 crc kubenswrapper[4813]: I1125 10:51:06.617765 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-jcjzx" event={"ID":"efca9205-8a59-45ce-8c50-36b0d0389f12","Type":"ContainerDied","Data":"e433512445f8930eb23fc7cbeaee87d955772ae05eaa6befc87f3a1cc1f105cf"} Nov 25 10:51:06 crc kubenswrapper[4813]: I1125 10:51:06.618354 4813 scope.go:117] "RemoveContainer" containerID="e433512445f8930eb23fc7cbeaee87d955772ae05eaa6befc87f3a1cc1f105cf" Nov 25 10:51:06 crc kubenswrapper[4813]: E1125 10:51:06.618584 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=manila-operator-controller-manager-58bb8d67cc-jcjzx_openstack-operators(efca9205-8a59-45ce-8c50-36b0d0389f12)\"" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-jcjzx" podUID="efca9205-8a59-45ce-8c50-36b0d0389f12" Nov 25 10:51:06 crc kubenswrapper[4813]: I1125 10:51:06.621817 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-864885998-bpbjt" event={"ID":"48ea1018-a88f-4ef0-a82f-7e3b012522ec","Type":"ContainerStarted","Data":"129fb58dceeec99f79108d79e4141e877c5ccddbc95d57d99165becf55b1745d"} Nov 25 10:51:06 crc kubenswrapper[4813]: I1125 10:51:06.627337 4813 generic.go:334] "Generic (PLEG): container finished" podID="94c3d2b4-f1bb-402d-a39d-78e16bee970b" containerID="9a9014db12945f5c91d4957251d5c07fad072365298baa2de399c2d1672f60e6" exitCode=1 Nov 25 10:51:06 crc kubenswrapper[4813]: I1125 10:51:06.627408 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-fjkzd" event={"ID":"94c3d2b4-f1bb-402d-a39d-78e16bee970b","Type":"ContainerDied","Data":"9a9014db12945f5c91d4957251d5c07fad072365298baa2de399c2d1672f60e6"} Nov 25 10:51:06 crc kubenswrapper[4813]: I1125 10:51:06.628406 4813 scope.go:117] "RemoveContainer" containerID="9a9014db12945f5c91d4957251d5c07fad072365298baa2de399c2d1672f60e6" Nov 25 10:51:06 crc kubenswrapper[4813]: E1125 10:51:06.628868 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=swift-operator-controller-manager-6fdc4fcf86-fjkzd_openstack-operators(94c3d2b4-f1bb-402d-a39d-78e16bee970b)\"" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-fjkzd" podUID="94c3d2b4-f1bb-402d-a39d-78e16bee970b" Nov 25 10:51:06 crc kubenswrapper[4813]: I1125 10:51:06.631542 4813 generic.go:334] "Generic (PLEG): container finished" podID="9374bbb0-b458-4c1c-a327-67bcbea83045" containerID="992531cc19bfe1ced64390d5b58ded8d348ef8aad2de68f0eb7b8d5f8b4ff0d3" exitCode=1 Nov 25 10:51:06 crc kubenswrapper[4813]: I1125 10:51:06.631616 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-6j272" event={"ID":"9374bbb0-b458-4c1c-a327-67bcbea83045","Type":"ContainerDied","Data":"992531cc19bfe1ced64390d5b58ded8d348ef8aad2de68f0eb7b8d5f8b4ff0d3"} Nov 25 10:51:06 crc kubenswrapper[4813]: I1125 10:51:06.632370 4813 scope.go:117] "RemoveContainer" 
containerID="992531cc19bfe1ced64390d5b58ded8d348ef8aad2de68f0eb7b8d5f8b4ff0d3" Nov 25 10:51:06 crc kubenswrapper[4813]: E1125 10:51:06.632647 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=nova-operator-controller-manager-79556f57fc-6j272_openstack-operators(9374bbb0-b458-4c1c-a327-67bcbea83045)\"" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-6j272" podUID="9374bbb0-b458-4c1c-a327-67bcbea83045" Nov 25 10:51:06 crc kubenswrapper[4813]: I1125 10:51:06.640924 4813 generic.go:334] "Generic (PLEG): container finished" podID="baf6f7bb-db50-4013-8b77-2b7e4c8101c2" containerID="0627009aad30b0ce2e452421ea5038adf0d553e83c703897ef42fc34d1270eb5" exitCode=1 Nov 25 10:51:06 crc kubenswrapper[4813]: I1125 10:51:06.640991 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-5ldjd" event={"ID":"baf6f7bb-db50-4013-8b77-2b7e4c8101c2","Type":"ContainerDied","Data":"0627009aad30b0ce2e452421ea5038adf0d553e83c703897ef42fc34d1270eb5"} Nov 25 10:51:06 crc kubenswrapper[4813]: I1125 10:51:06.641562 4813 scope.go:117] "RemoveContainer" containerID="0627009aad30b0ce2e452421ea5038adf0d553e83c703897ef42fc34d1270eb5" Nov 25 10:51:06 crc kubenswrapper[4813]: E1125 10:51:06.641818 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=mariadb-operator-controller-manager-cb6c4fdb7-5ldjd_openstack-operators(baf6f7bb-db50-4013-8b77-2b7e4c8101c2)\"" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-5ldjd" podUID="baf6f7bb-db50-4013-8b77-2b7e4c8101c2" Nov 25 10:51:06 crc kubenswrapper[4813]: I1125 10:51:06.644513 4813 generic.go:334] "Generic (PLEG): container finished" podID="b69526d6-6616-4536-a228-4cdb57e1881c" containerID="43d5691a3552e7c2d7e6aa05dd094621e377ecc88488a8e0c5598d77d496a181" exitCode=1 Nov 25 10:51:06 crc kubenswrapper[4813]: I1125 10:51:06.644579 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-c6kw6" event={"ID":"b69526d6-6616-4536-a228-4cdb57e1881c","Type":"ContainerDied","Data":"43d5691a3552e7c2d7e6aa05dd094621e377ecc88488a8e0c5598d77d496a181"} Nov 25 10:51:06 crc kubenswrapper[4813]: I1125 10:51:06.644974 4813 scope.go:117] "RemoveContainer" containerID="43d5691a3552e7c2d7e6aa05dd094621e377ecc88488a8e0c5598d77d496a181" Nov 25 10:51:06 crc kubenswrapper[4813]: E1125 10:51:06.645232 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=neutron-operator-controller-manager-7c57c8bbc4-c6kw6_openstack-operators(b69526d6-6616-4536-a228-4cdb57e1881c)\"" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-c6kw6" podUID="b69526d6-6616-4536-a228-4cdb57e1881c" Nov 25 10:51:06 crc kubenswrapper[4813]: I1125 10:51:06.646387 4813 generic.go:334] "Generic (PLEG): container finished" podID="2bf03402-32ec-423d-a6af-657bc0cfeb15" containerID="e28573f59b3128bbce8c504da372cc321d5d011981900766f25a8cfb8347def3" exitCode=1 Nov 25 10:51:06 crc kubenswrapper[4813]: I1125 10:51:06.646441 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qd4tx" 
event={"ID":"2bf03402-32ec-423d-a6af-657bc0cfeb15","Type":"ContainerDied","Data":"e28573f59b3128bbce8c504da372cc321d5d011981900766f25a8cfb8347def3"} Nov 25 10:51:06 crc kubenswrapper[4813]: I1125 10:51:06.646765 4813 scope.go:117] "RemoveContainer" containerID="e28573f59b3128bbce8c504da372cc321d5d011981900766f25a8cfb8347def3" Nov 25 10:51:06 crc kubenswrapper[4813]: E1125 10:51:06.646925 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=operator pod=rabbitmq-cluster-operator-manager-668c99d594-qd4tx_openstack-operators(2bf03402-32ec-423d-a6af-657bc0cfeb15)\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qd4tx" podUID="2bf03402-32ec-423d-a6af-657bc0cfeb15" Nov 25 10:51:06 crc kubenswrapper[4813]: I1125 10:51:06.649052 4813 scope.go:117] "RemoveContainer" containerID="6cfbef3e5911a335e778a25cc22825312e21d3376c549b161d9302f36e73d1b9" Nov 25 10:51:06 crc kubenswrapper[4813]: E1125 10:51:06.649228 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=cinder-operator-controller-manager-79856dc55c-dvfd9_openstack-operators(a650bdd3-2541-4b76-b5db-64273262bc06)\"" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-dvfd9" podUID="a650bdd3-2541-4b76-b5db-64273262bc06" Nov 25 10:51:06 crc kubenswrapper[4813]: I1125 10:51:06.651599 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Nov 25 10:51:06 crc kubenswrapper[4813]: I1125 10:51:06.738604 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 25 10:51:06 crc kubenswrapper[4813]: I1125 10:51:06.896931 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-mfq2v" Nov 25 10:51:06 crc kubenswrapper[4813]: I1125 10:51:06.923103 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 25 10:51:07 crc kubenswrapper[4813]: I1125 10:51:07.053110 4813 scope.go:117] "RemoveContainer" containerID="edffe6489b5c73fee9cf8cbb8977f408153e00bc3bbc9a6395f0b2d4426f9935" Nov 25 10:51:07 crc kubenswrapper[4813]: I1125 10:51:07.080381 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 25 10:51:07 crc kubenswrapper[4813]: I1125 10:51:07.084978 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Nov 25 10:51:07 crc kubenswrapper[4813]: I1125 10:51:07.092458 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Nov 25 10:51:07 crc kubenswrapper[4813]: I1125 10:51:07.131345 4813 scope.go:117] "RemoveContainer" containerID="264694fb3363c363000af0fabea19789703c2886c007b5b5b7dc0911decc420a" Nov 25 10:51:07 crc kubenswrapper[4813]: I1125 10:51:07.225429 4813 scope.go:117] "RemoveContainer" containerID="2eaadf3a93459490dde061b673492d47d51a2c68fb5222284d46e52587ffb474" Nov 25 10:51:07 crc kubenswrapper[4813]: I1125 10:51:07.248122 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-f46kq" Nov 25 10:51:07 crc kubenswrapper[4813]: I1125 10:51:07.276931 4813 scope.go:117] 
"RemoveContainer" containerID="8930151d465ee02dbf1e9291a7bc76a3b99ca6cc4c297b46b49817360947303d" Nov 25 10:51:07 crc kubenswrapper[4813]: I1125 10:51:07.282871 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-gqgjk" Nov 25 10:51:07 crc kubenswrapper[4813]: I1125 10:51:07.290461 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Nov 25 10:51:07 crc kubenswrapper[4813]: I1125 10:51:07.320546 4813 scope.go:117] "RemoveContainer" containerID="a8658001c24e23d0f354d33250a7b2c39f4e0e69e51a660eff607088497ca501" Nov 25 10:51:07 crc kubenswrapper[4813]: I1125 10:51:07.348154 4813 scope.go:117] "RemoveContainer" containerID="2f3df6ee96c1826ef338deaed964bd5c5fedf13358353937912ea52c92fa80f8" Nov 25 10:51:07 crc kubenswrapper[4813]: I1125 10:51:07.361504 4813 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-jb89b" Nov 25 10:51:07 crc kubenswrapper[4813]: I1125 10:51:07.361733 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Nov 25 10:51:07 crc kubenswrapper[4813]: I1125 10:51:07.362555 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Nov 25 10:51:07 crc kubenswrapper[4813]: I1125 10:51:07.399427 4813 scope.go:117] "RemoveContainer" containerID="ffa68b56ce90c24616b3f49d4d4f7a9b8dff8addac564d9c4798adc5a764af9c" Nov 25 10:51:07 crc kubenswrapper[4813]: I1125 10:51:07.416202 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Nov 25 10:51:07 crc kubenswrapper[4813]: I1125 10:51:07.426181 4813 scope.go:117] "RemoveContainer" containerID="6ab88f58405febfc4b2658925fe24837daf2715de3bde72851222fc9b6284fca" Nov 25 10:51:07 crc kubenswrapper[4813]: I1125 10:51:07.479323 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Nov 25 10:51:07 crc kubenswrapper[4813]: I1125 10:51:07.495474 4813 scope.go:117] "RemoveContainer" containerID="b336d93ef0071b8168144d46bcb0c44981c4be7dc6628753c0c834d82d1cb9a9" Nov 25 10:51:07 crc kubenswrapper[4813]: I1125 10:51:07.537997 4813 scope.go:117] "RemoveContainer" containerID="d09637388df878cdf2bfa3d5e5d83aa661d35fdf74a36a7300cd4e0118bf3141" Nov 25 10:51:07 crc kubenswrapper[4813]: I1125 10:51:07.566398 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Nov 25 10:51:07 crc kubenswrapper[4813]: I1125 10:51:07.576491 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Nov 25 10:51:07 crc kubenswrapper[4813]: I1125 10:51:07.583920 4813 scope.go:117] "RemoveContainer" containerID="41b0500f8fda41e041f07f12a281727c6c4654deebb714ace797dfcdc6453b60" Nov 25 10:51:07 crc kubenswrapper[4813]: I1125 10:51:07.639294 4813 scope.go:117] "RemoveContainer" containerID="da5fb8b8e603c05b9d5c4ba2ad623ce2ce8911c409e387179676d32d29cf262a" Nov 25 10:51:07 crc kubenswrapper[4813]: I1125 10:51:07.660524 4813 generic.go:334] "Generic (PLEG): container finished" podID="db556642-a360-4559-8cde-7c25d7a893e0" containerID="66ec40f15a48177338b909733dfca944bf8b176edec10b3c22c1f9a4cccae5b5" exitCode=1 Nov 25 10:51:07 crc kubenswrapper[4813]: I1125 10:51:07.660614 4813 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-tc2mg" event={"ID":"db556642-a360-4559-8cde-7c25d7a893e0","Type":"ContainerDied","Data":"66ec40f15a48177338b909733dfca944bf8b176edec10b3c22c1f9a4cccae5b5"} Nov 25 10:51:07 crc kubenswrapper[4813]: I1125 10:51:07.661920 4813 scope.go:117] "RemoveContainer" containerID="66ec40f15a48177338b909733dfca944bf8b176edec10b3c22c1f9a4cccae5b5" Nov 25 10:51:07 crc kubenswrapper[4813]: E1125 10:51:07.662283 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=ovn-operator-controller-manager-66cf5c67ff-tc2mg_openstack-operators(db556642-a360-4559-8cde-7c25d7a893e0)\"" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-tc2mg" podUID="db556642-a360-4559-8cde-7c25d7a893e0" Nov 25 10:51:07 crc kubenswrapper[4813]: I1125 10:51:07.668479 4813 scope.go:117] "RemoveContainer" containerID="71f03dbe720f3dde3b5d0765c2080a87ad643d916dde889b7465a7f5184107f5" Nov 25 10:51:07 crc kubenswrapper[4813]: I1125 10:51:07.672450 4813 generic.go:334] "Generic (PLEG): container finished" podID="48ea1018-a88f-4ef0-a82f-7e3b012522ec" containerID="129fb58dceeec99f79108d79e4141e877c5ccddbc95d57d99165becf55b1745d" exitCode=1 Nov 25 10:51:07 crc kubenswrapper[4813]: I1125 10:51:07.672495 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-864885998-bpbjt" event={"ID":"48ea1018-a88f-4ef0-a82f-7e3b012522ec","Type":"ContainerDied","Data":"129fb58dceeec99f79108d79e4141e877c5ccddbc95d57d99165becf55b1745d"} Nov 25 10:51:07 crc kubenswrapper[4813]: I1125 10:51:07.673119 4813 scope.go:117] "RemoveContainer" containerID="129fb58dceeec99f79108d79e4141e877c5ccddbc95d57d99165becf55b1745d" Nov 25 10:51:07 crc kubenswrapper[4813]: E1125 10:51:07.673352 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=watcher-operator-controller-manager-864885998-bpbjt_openstack-operators(48ea1018-a88f-4ef0-a82f-7e3b012522ec)\"" pod="openstack-operators/watcher-operator-controller-manager-864885998-bpbjt" podUID="48ea1018-a88f-4ef0-a82f-7e3b012522ec" Nov 25 10:51:07 crc kubenswrapper[4813]: I1125 10:51:07.680242 4813 scope.go:117] "RemoveContainer" containerID="852ade3a02bdc4966cc001cd60b4f66a047664199c14930683be6960cadaac48" Nov 25 10:51:07 crc kubenswrapper[4813]: E1125 10:51:07.680653 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=placement-operator-controller-manager-5db546f9d9-2d2x7_openstack-operators(9093a664-86f3-4349-bd13-0a5e4aca8036)\"" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-2d2x7" podUID="9093a664-86f3-4349-bd13-0a5e4aca8036" Nov 25 10:51:07 crc kubenswrapper[4813]: I1125 10:51:07.694521 4813 scope.go:117] "RemoveContainer" containerID="3bd95e258595cc79eae7f50ce683229c518378d20824c087c3bc89823343ab14" Nov 25 10:51:07 crc kubenswrapper[4813]: I1125 10:51:07.696180 4813 generic.go:334] "Generic (PLEG): container finished" podID="5f9254c7-c8dc-4504-bdf5-264c78e03b0c" containerID="8439f0e4f753871ca0b6d1cd7f0234c50f6500f26d7675f32dbb5d90cee04305" exitCode=1 Nov 25 10:51:07 crc kubenswrapper[4813]: I1125 10:51:07.696217 4813 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-qplf9" event={"ID":"5f9254c7-c8dc-4504-bdf5-264c78e03b0c","Type":"ContainerDied","Data":"8439f0e4f753871ca0b6d1cd7f0234c50f6500f26d7675f32dbb5d90cee04305"} Nov 25 10:51:07 crc kubenswrapper[4813]: I1125 10:51:07.696851 4813 scope.go:117] "RemoveContainer" containerID="8439f0e4f753871ca0b6d1cd7f0234c50f6500f26d7675f32dbb5d90cee04305" Nov 25 10:51:07 crc kubenswrapper[4813]: E1125 10:51:07.697070 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=telemetry-operator-controller-manager-567f98c9d-qplf9_openstack-operators(5f9254c7-c8dc-4504-bdf5-264c78e03b0c)\"" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-qplf9" podUID="5f9254c7-c8dc-4504-bdf5-264c78e03b0c" Nov 25 10:51:07 crc kubenswrapper[4813]: I1125 10:51:07.715815 4813 scope.go:117] "RemoveContainer" containerID="08c55215eff81a95904e99154d36ad7ea72cad77ec8e4aa5f314e90700fedc4a" Nov 25 10:51:07 crc kubenswrapper[4813]: I1125 10:51:07.723550 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Nov 25 10:51:07 crc kubenswrapper[4813]: I1125 10:51:07.741595 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Nov 25 10:51:07 crc kubenswrapper[4813]: I1125 10:51:07.771786 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Nov 25 10:51:07 crc kubenswrapper[4813]: I1125 10:51:07.773037 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-r45mj" Nov 25 10:51:07 crc kubenswrapper[4813]: I1125 10:51:07.784971 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Nov 25 10:51:07 crc kubenswrapper[4813]: I1125 10:51:07.833360 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Nov 25 10:51:07 crc kubenswrapper[4813]: I1125 10:51:07.918212 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-n68f4" Nov 25 10:51:08 crc kubenswrapper[4813]: I1125 10:51:08.083543 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Nov 25 10:51:08 crc kubenswrapper[4813]: I1125 10:51:08.209917 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Nov 25 10:51:08 crc kubenswrapper[4813]: I1125 10:51:08.263627 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Nov 25 10:51:08 crc kubenswrapper[4813]: I1125 10:51:08.451357 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Nov 25 10:51:08 crc kubenswrapper[4813]: I1125 10:51:08.545268 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 25 10:51:08 crc kubenswrapper[4813]: I1125 10:51:08.667033 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Nov 25 10:51:08 crc kubenswrapper[4813]: 
I1125 10:51:08.707357 4813 scope.go:117] "RemoveContainer" containerID="8439f0e4f753871ca0b6d1cd7f0234c50f6500f26d7675f32dbb5d90cee04305" Nov 25 10:51:08 crc kubenswrapper[4813]: E1125 10:51:08.707872 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=telemetry-operator-controller-manager-567f98c9d-qplf9_openstack-operators(5f9254c7-c8dc-4504-bdf5-264c78e03b0c)\"" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-qplf9" podUID="5f9254c7-c8dc-4504-bdf5-264c78e03b0c" Nov 25 10:51:08 crc kubenswrapper[4813]: I1125 10:51:08.711767 4813 scope.go:117] "RemoveContainer" containerID="66ec40f15a48177338b909733dfca944bf8b176edec10b3c22c1f9a4cccae5b5" Nov 25 10:51:08 crc kubenswrapper[4813]: E1125 10:51:08.711988 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=ovn-operator-controller-manager-66cf5c67ff-tc2mg_openstack-operators(db556642-a360-4559-8cde-7c25d7a893e0)\"" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-tc2mg" podUID="db556642-a360-4559-8cde-7c25d7a893e0" Nov 25 10:51:08 crc kubenswrapper[4813]: I1125 10:51:08.716128 4813 scope.go:117] "RemoveContainer" containerID="129fb58dceeec99f79108d79e4141e877c5ccddbc95d57d99165becf55b1745d" Nov 25 10:51:08 crc kubenswrapper[4813]: E1125 10:51:08.716311 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=watcher-operator-controller-manager-864885998-bpbjt_openstack-operators(48ea1018-a88f-4ef0-a82f-7e3b012522ec)\"" pod="openstack-operators/watcher-operator-controller-manager-864885998-bpbjt" podUID="48ea1018-a88f-4ef0-a82f-7e3b012522ec" Nov 25 10:51:08 crc kubenswrapper[4813]: I1125 10:51:08.757651 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Nov 25 10:51:08 crc kubenswrapper[4813]: I1125 10:51:08.973965 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Nov 25 10:51:08 crc kubenswrapper[4813]: I1125 10:51:08.985209 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/openstack-operator-controller-manager-5ffc8f797b-hbwwd" Nov 25 10:51:08 crc kubenswrapper[4813]: I1125 10:51:08.985282 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-5ffc8f797b-hbwwd" Nov 25 10:51:08 crc kubenswrapper[4813]: I1125 10:51:08.985920 4813 scope.go:117] "RemoveContainer" containerID="d1e62b445459b34984999bd018b9ecc5cad36cfc97c7cb8b1e67620067d14695" Nov 25 10:51:08 crc kubenswrapper[4813]: E1125 10:51:08.986135 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=openstack-operator-controller-manager-5ffc8f797b-hbwwd_openstack-operators(09bd1800-0aaa-4908-ac58-e0890a2a309f)\"" pod="openstack-operators/openstack-operator-controller-manager-5ffc8f797b-hbwwd" podUID="09bd1800-0aaa-4908-ac58-e0890a2a309f" Nov 25 10:51:09 crc kubenswrapper[4813]: I1125 10:51:09.055832 4813 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"openshift-global-ca" Nov 25 10:51:09 crc kubenswrapper[4813]: I1125 10:51:09.114735 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Nov 25 10:51:09 crc kubenswrapper[4813]: I1125 10:51:09.141465 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Nov 25 10:51:09 crc kubenswrapper[4813]: I1125 10:51:09.195258 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Nov 25 10:51:09 crc kubenswrapper[4813]: I1125 10:51:09.218541 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Nov 25 10:51:09 crc kubenswrapper[4813]: I1125 10:51:09.265164 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/kube-state-metrics-0" Nov 25 10:51:09 crc kubenswrapper[4813]: I1125 10:51:09.265240 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 25 10:51:09 crc kubenswrapper[4813]: I1125 10:51:09.266033 4813 scope.go:117] "RemoveContainer" containerID="95750d6a122b9d2009ab5f55a56f9dd060d644f88e52fa5d51be75d058873105" Nov 25 10:51:09 crc kubenswrapper[4813]: E1125 10:51:09.266279 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-state-metrics pod=kube-state-metrics-0_openstack(e9030c35-b810-4f59-b1e6-5daec39fcc6d)\"" pod="openstack/kube-state-metrics-0" podUID="e9030c35-b810-4f59-b1e6-5daec39fcc6d" Nov 25 10:51:09 crc kubenswrapper[4813]: I1125 10:51:09.283447 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Nov 25 10:51:09 crc kubenswrapper[4813]: I1125 10:51:09.293535 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Nov 25 10:51:09 crc kubenswrapper[4813]: I1125 10:51:09.558742 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Nov 25 10:51:09 crc kubenswrapper[4813]: I1125 10:51:09.596374 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Nov 25 10:51:09 crc kubenswrapper[4813]: I1125 10:51:09.636511 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 25 10:51:09 crc kubenswrapper[4813]: I1125 10:51:09.676287 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Nov 25 10:51:09 crc kubenswrapper[4813]: I1125 10:51:09.753887 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Nov 25 10:51:09 crc kubenswrapper[4813]: I1125 10:51:09.761356 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Nov 25 10:51:09 crc kubenswrapper[4813]: I1125 10:51:09.789171 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Nov 25 10:51:09 crc kubenswrapper[4813]: I1125 10:51:09.790349 4813 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-user-template-error" Nov 25 10:51:09 crc kubenswrapper[4813]: I1125 10:51:09.832491 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Nov 25 10:51:09 crc kubenswrapper[4813]: I1125 10:51:09.953562 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Nov 25 10:51:10 crc kubenswrapper[4813]: I1125 10:51:10.012254 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-8qsg4" Nov 25 10:51:10 crc kubenswrapper[4813]: I1125 10:51:10.059147 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Nov 25 10:51:10 crc kubenswrapper[4813]: I1125 10:51:10.107657 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Nov 25 10:51:10 crc kubenswrapper[4813]: I1125 10:51:10.128877 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Nov 25 10:51:10 crc kubenswrapper[4813]: I1125 10:51:10.214167 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Nov 25 10:51:10 crc kubenswrapper[4813]: I1125 10:51:10.331399 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Nov 25 10:51:10 crc kubenswrapper[4813]: I1125 10:51:10.398099 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Nov 25 10:51:10 crc kubenswrapper[4813]: I1125 10:51:10.416266 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-qgwvs" Nov 25 10:51:10 crc kubenswrapper[4813]: I1125 10:51:10.608052 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Nov 25 10:51:10 crc kubenswrapper[4813]: I1125 10:51:10.629565 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Nov 25 10:51:10 crc kubenswrapper[4813]: I1125 10:51:10.635378 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Nov 25 10:51:10 crc kubenswrapper[4813]: I1125 10:51:10.832092 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Nov 25 10:51:10 crc kubenswrapper[4813]: I1125 10:51:10.850402 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Nov 25 10:51:10 crc kubenswrapper[4813]: I1125 10:51:10.924779 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Nov 25 10:51:11 crc kubenswrapper[4813]: I1125 10:51:11.036048 4813 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Nov 25 10:51:11 crc kubenswrapper[4813]: I1125 10:51:11.153697 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Nov 25 10:51:11 crc kubenswrapper[4813]: I1125 
10:51:11.214087 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Nov 25 10:51:11 crc kubenswrapper[4813]: I1125 10:51:11.268697 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Nov 25 10:51:11 crc kubenswrapper[4813]: I1125 10:51:11.364541 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Nov 25 10:51:11 crc kubenswrapper[4813]: I1125 10:51:11.397610 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-jtc2b" Nov 25 10:51:11 crc kubenswrapper[4813]: I1125 10:51:11.477801 4813 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-r77fj" Nov 25 10:51:11 crc kubenswrapper[4813]: I1125 10:51:11.533221 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Nov 25 10:51:11 crc kubenswrapper[4813]: I1125 10:51:11.575253 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Nov 25 10:51:11 crc kubenswrapper[4813]: I1125 10:51:11.731159 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Nov 25 10:51:11 crc kubenswrapper[4813]: I1125 10:51:11.770438 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Nov 25 10:51:11 crc kubenswrapper[4813]: I1125 10:51:11.775160 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Nov 25 10:51:11 crc kubenswrapper[4813]: I1125 10:51:11.780984 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Nov 25 10:51:11 crc kubenswrapper[4813]: I1125 10:51:11.834139 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Nov 25 10:51:11 crc kubenswrapper[4813]: I1125 10:51:11.844140 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Nov 25 10:51:12 crc kubenswrapper[4813]: I1125 10:51:12.006439 4813 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-dw4ph" Nov 25 10:51:12 crc kubenswrapper[4813]: I1125 10:51:12.008555 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-tsnsm" Nov 25 10:51:12 crc kubenswrapper[4813]: I1125 10:51:12.047177 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Nov 25 10:51:12 crc kubenswrapper[4813]: I1125 10:51:12.071451 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Nov 25 10:51:12 crc kubenswrapper[4813]: I1125 10:51:12.168355 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Nov 25 10:51:12 crc kubenswrapper[4813]: I1125 10:51:12.298885 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 25 10:51:12 crc kubenswrapper[4813]: I1125 10:51:12.342267 4813 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Nov 25 10:51:12 crc kubenswrapper[4813]: I1125 10:51:12.384730 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 25 10:51:12 crc kubenswrapper[4813]: I1125 10:51:12.408541 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Nov 25 10:51:12 crc kubenswrapper[4813]: I1125 10:51:12.458838 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Nov 25 10:51:12 crc kubenswrapper[4813]: I1125 10:51:12.481849 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Nov 25 10:51:12 crc kubenswrapper[4813]: I1125 10:51:12.531628 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Nov 25 10:51:12 crc kubenswrapper[4813]: I1125 10:51:12.555809 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Nov 25 10:51:12 crc kubenswrapper[4813]: I1125 10:51:12.683290 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Nov 25 10:51:12 crc kubenswrapper[4813]: I1125 10:51:12.778703 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Nov 25 10:51:12 crc kubenswrapper[4813]: I1125 10:51:12.780634 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Nov 25 10:51:12 crc kubenswrapper[4813]: I1125 10:51:12.808583 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 25 10:51:12 crc kubenswrapper[4813]: I1125 10:51:12.812293 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Nov 25 10:51:12 crc kubenswrapper[4813]: I1125 10:51:12.863460 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Nov 25 10:51:13 crc kubenswrapper[4813]: I1125 10:51:13.080337 4813 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Nov 25 10:51:13 crc kubenswrapper[4813]: I1125 10:51:13.132798 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Nov 25 10:51:13 crc kubenswrapper[4813]: I1125 10:51:13.234394 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Nov 25 10:51:13 crc kubenswrapper[4813]: I1125 10:51:13.350662 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Nov 25 10:51:13 crc kubenswrapper[4813]: I1125 10:51:13.459616 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Nov 25 10:51:13 crc kubenswrapper[4813]: I1125 10:51:13.565173 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Nov 25 10:51:13 crc kubenswrapper[4813]: I1125 10:51:13.611767 4813 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Nov 25 10:51:13 crc kubenswrapper[4813]: I1125 10:51:13.672374 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Nov 25 10:51:13 crc kubenswrapper[4813]: I1125 10:51:13.766876 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Nov 25 10:51:13 crc kubenswrapper[4813]: I1125 10:51:13.809697 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Nov 25 10:51:13 crc kubenswrapper[4813]: I1125 10:51:13.809701 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Nov 25 10:51:13 crc kubenswrapper[4813]: I1125 10:51:13.900303 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-7htdr" Nov 25 10:51:13 crc kubenswrapper[4813]: I1125 10:51:13.985487 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.021959 4813 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.023271 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-hdpgf" podStartSLOduration=-9223371974.831522 podStartE2EDuration="1m2.023254617s" podCreationTimestamp="2025-11-25 10:50:12 +0000 UTC" firstStartedPulling="2025-11-25 10:50:13.621587408 +0000 UTC m=+1110.751297294" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 10:50:55.999138491 +0000 UTC m=+1153.128848387" watchObservedRunningTime="2025-11-25 10:51:14.023254617 +0000 UTC m=+1171.152964493" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.023871 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=51.389419377 podStartE2EDuration="57.023867965s" podCreationTimestamp="2025-11-25 10:50:17 +0000 UTC" firstStartedPulling="2025-11-25 10:50:56.903169834 +0000 UTC m=+1154.032879730" lastFinishedPulling="2025-11-25 10:51:02.537618442 +0000 UTC m=+1159.667328318" observedRunningTime="2025-11-25 10:51:03.129029574 +0000 UTC m=+1160.258739470" watchObservedRunningTime="2025-11-25 10:51:14.023867965 +0000 UTC m=+1171.153577841" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.026825 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-6bkfh" podStartSLOduration=20.699755069 podStartE2EDuration="1m1.026817919s" podCreationTimestamp="2025-11-25 10:50:13 +0000 UTC" firstStartedPulling="2025-11-25 10:50:14.047700134 +0000 UTC m=+1111.177410020" lastFinishedPulling="2025-11-25 10:50:54.374762984 +0000 UTC m=+1151.504472870" observedRunningTime="2025-11-25 10:50:56.012532242 +0000 UTC m=+1153.142242138" watchObservedRunningTime="2025-11-25 10:51:14.026817919 +0000 UTC m=+1171.156527805" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.027896 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-qjpvf" podStartSLOduration=44.800826954 podStartE2EDuration="50.027890409s" podCreationTimestamp="2025-11-25 10:50:24 +0000 UTC" firstStartedPulling="2025-11-25 10:50:57.409304442 +0000 
UTC m=+1154.539014328" lastFinishedPulling="2025-11-25 10:51:02.636367897 +0000 UTC m=+1159.766077783" observedRunningTime="2025-11-25 10:51:03.185018434 +0000 UTC m=+1160.314728320" watchObservedRunningTime="2025-11-25 10:51:14.027890409 +0000 UTC m=+1171.157600295" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.028541 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-kzv7f" podStartSLOduration=46.821710875 podStartE2EDuration="50.028534167s" podCreationTimestamp="2025-11-25 10:50:24 +0000 UTC" firstStartedPulling="2025-11-25 10:50:50.788009678 +0000 UTC m=+1147.917719564" lastFinishedPulling="2025-11-25 10:50:53.99483296 +0000 UTC m=+1151.124542856" observedRunningTime="2025-11-25 10:50:56.027154897 +0000 UTC m=+1153.156864793" watchObservedRunningTime="2025-11-25 10:51:14.028534167 +0000 UTC m=+1171.158244053" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.029837 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-wn82k","openstack/dnsmasq-dns-78dd6ddcc-s5ghf","openshift-kube-apiserver/kube-apiserver-crc"] Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.029898 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0","openshift-kube-apiserver/kube-apiserver-crc","openshift-must-gather-cdcjj/must-gather-5vk8l","openstack/ovsdbserver-sb-0","openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 25 10:51:14 crc kubenswrapper[4813]: E1125 10:51:14.030325 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c3ebcfb-71d9-4d57-824a-b6468b15791e" containerName="installer" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.030345 4813 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="86379c39-b839-4552-949c-35431188a3a7" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.030405 4813 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="86379c39-b839-4552-949c-35431188a3a7" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.030358 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c3ebcfb-71d9-4d57-824a-b6468b15791e" containerName="installer" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.030671 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c3ebcfb-71d9-4d57-824a-b6468b15791e" containerName="installer" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.039757 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.043145 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.043876 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-5tp72" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.043879 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.044215 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.044449 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.050126 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.052848 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.052925 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0","openstack/openstack-galera-0","openstack/openstack-cell1-galera-0","openstack/ovn-controller-qjpvf","openstack/kube-state-metrics-0","openstack/ovn-controller-ovs-kzv7f"] Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.052959 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-hdpgf"] Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.053124 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.053802 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-hdpgf" podUID="78498723-5c73-4aa4-8480-ef20ce8593ac" containerName="dnsmasq-dns" containerID="cri-o://f1d21f9b405c6f0be60360ae0f540166ed247b53b4cbefa02f13e3b211905ab8" gracePeriod=10 Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.060291 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cdcjj/must-gather-5vk8l" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.061570 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-z64mz" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.061753 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.061875 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.064172 4813 scope.go:117] "RemoveContainer" containerID="95750d6a122b9d2009ab5f55a56f9dd060d644f88e52fa5d51be75d058873105" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.066079 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-cdcjj"/"openshift-service-ca.crt" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.066147 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-cdcjj"/"kube-root-ca.crt" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.085301 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-t7lft" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.096365 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=27.096309353 podStartE2EDuration="27.096309353s" podCreationTimestamp="2025-11-25 10:50:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 10:51:14.089939882 +0000 UTC m=+1171.219649788" watchObservedRunningTime="2025-11-25 10:51:14.096309353 +0000 UTC m=+1171.226019259" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.108731 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=24.108715695 podStartE2EDuration="24.108715695s" podCreationTimestamp="2025-11-25 10:50:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 10:51:14.102813337 +0000 UTC m=+1171.232523243" watchObservedRunningTime="2025-11-25 10:51:14.108715695 +0000 UTC m=+1171.238425581" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.145467 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: \"7c8efda2-acd3-4ecf-9295-0ad8d037ca94\") " pod="openstack/ovsdbserver-sb-0" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.145815 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/90d80d33-b519-4d67-97ba-1b8b828e917b-must-gather-output\") pod \"must-gather-5vk8l\" (UID: \"90d80d33-b519-4d67-97ba-1b8b828e917b\") " pod="openshift-must-gather-cdcjj/must-gather-5vk8l" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.146193 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/7c8efda2-acd3-4ecf-9295-0ad8d037ca94-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7c8efda2-acd3-4ecf-9295-0ad8d037ca94\") " pod="openstack/ovsdbserver-sb-0" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.146318 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/04683d4b-dec7-42f6-9803-b301f1d449c3-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"04683d4b-dec7-42f6-9803-b301f1d449c3\") " pod="openstack/ovsdbserver-nb-0" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.146509 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04683d4b-dec7-42f6-9803-b301f1d449c3-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"04683d4b-dec7-42f6-9803-b301f1d449c3\") " pod="openstack/ovsdbserver-nb-0" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.146711 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"04683d4b-dec7-42f6-9803-b301f1d449c3\") " pod="openstack/ovsdbserver-nb-0" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.146852 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pqhb\" (UniqueName: \"kubernetes.io/projected/7c8efda2-acd3-4ecf-9295-0ad8d037ca94-kube-api-access-5pqhb\") pod \"ovsdbserver-sb-0\" (UID: \"7c8efda2-acd3-4ecf-9295-0ad8d037ca94\") " pod="openstack/ovsdbserver-sb-0" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.147139 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/04683d4b-dec7-42f6-9803-b301f1d449c3-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"04683d4b-dec7-42f6-9803-b301f1d449c3\") " pod="openstack/ovsdbserver-nb-0" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.147294 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c8efda2-acd3-4ecf-9295-0ad8d037ca94-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7c8efda2-acd3-4ecf-9295-0ad8d037ca94\") " pod="openstack/ovsdbserver-sb-0" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.147472 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwxrt\" (UniqueName: \"kubernetes.io/projected/04683d4b-dec7-42f6-9803-b301f1d449c3-kube-api-access-nwxrt\") pod \"ovsdbserver-nb-0\" (UID: \"04683d4b-dec7-42f6-9803-b301f1d449c3\") " pod="openstack/ovsdbserver-nb-0" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.147636 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04683d4b-dec7-42f6-9803-b301f1d449c3-config\") pod \"ovsdbserver-nb-0\" (UID: \"04683d4b-dec7-42f6-9803-b301f1d449c3\") " pod="openstack/ovsdbserver-nb-0" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.147882 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7c8efda2-acd3-4ecf-9295-0ad8d037ca94-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"7c8efda2-acd3-4ecf-9295-0ad8d037ca94\") " pod="openstack/ovsdbserver-sb-0" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.148077 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7c8efda2-acd3-4ecf-9295-0ad8d037ca94-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"7c8efda2-acd3-4ecf-9295-0ad8d037ca94\") " pod="openstack/ovsdbserver-sb-0" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.149594 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c8efda2-acd3-4ecf-9295-0ad8d037ca94-config\") pod \"ovsdbserver-sb-0\" (UID: \"7c8efda2-acd3-4ecf-9295-0ad8d037ca94\") " pod="openstack/ovsdbserver-sb-0" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.150156 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7c8efda2-acd3-4ecf-9295-0ad8d037ca94-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"7c8efda2-acd3-4ecf-9295-0ad8d037ca94\") " pod="openstack/ovsdbserver-sb-0" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.150450 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/04683d4b-dec7-42f6-9803-b301f1d449c3-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"04683d4b-dec7-42f6-9803-b301f1d449c3\") " pod="openstack/ovsdbserver-nb-0" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.150752 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvtsf\" (UniqueName: \"kubernetes.io/projected/90d80d33-b519-4d67-97ba-1b8b828e917b-kube-api-access-xvtsf\") pod \"must-gather-5vk8l\" (UID: \"90d80d33-b519-4d67-97ba-1b8b828e917b\") " pod="openshift-must-gather-cdcjj/must-gather-5vk8l" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.150935 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/04683d4b-dec7-42f6-9803-b301f1d449c3-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"04683d4b-dec7-42f6-9803-b301f1d449c3\") " pod="openstack/ovsdbserver-nb-0" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.169103 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.244762 4813 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-ppl7c" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.252422 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"04683d4b-dec7-42f6-9803-b301f1d449c3\") " pod="openstack/ovsdbserver-nb-0" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.252480 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pqhb\" (UniqueName: \"kubernetes.io/projected/7c8efda2-acd3-4ecf-9295-0ad8d037ca94-kube-api-access-5pqhb\") pod \"ovsdbserver-sb-0\" (UID: \"7c8efda2-acd3-4ecf-9295-0ad8d037ca94\") " 
pod="openstack/ovsdbserver-sb-0" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.252502 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/04683d4b-dec7-42f6-9803-b301f1d449c3-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"04683d4b-dec7-42f6-9803-b301f1d449c3\") " pod="openstack/ovsdbserver-nb-0" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.252526 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c8efda2-acd3-4ecf-9295-0ad8d037ca94-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7c8efda2-acd3-4ecf-9295-0ad8d037ca94\") " pod="openstack/ovsdbserver-sb-0" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.252557 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwxrt\" (UniqueName: \"kubernetes.io/projected/04683d4b-dec7-42f6-9803-b301f1d449c3-kube-api-access-nwxrt\") pod \"ovsdbserver-nb-0\" (UID: \"04683d4b-dec7-42f6-9803-b301f1d449c3\") " pod="openstack/ovsdbserver-nb-0" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.252586 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04683d4b-dec7-42f6-9803-b301f1d449c3-config\") pod \"ovsdbserver-nb-0\" (UID: \"04683d4b-dec7-42f6-9803-b301f1d449c3\") " pod="openstack/ovsdbserver-nb-0" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.252614 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c8efda2-acd3-4ecf-9295-0ad8d037ca94-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"7c8efda2-acd3-4ecf-9295-0ad8d037ca94\") " pod="openstack/ovsdbserver-sb-0" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.252646 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7c8efda2-acd3-4ecf-9295-0ad8d037ca94-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"7c8efda2-acd3-4ecf-9295-0ad8d037ca94\") " pod="openstack/ovsdbserver-sb-0" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.252671 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c8efda2-acd3-4ecf-9295-0ad8d037ca94-config\") pod \"ovsdbserver-sb-0\" (UID: \"7c8efda2-acd3-4ecf-9295-0ad8d037ca94\") " pod="openstack/ovsdbserver-sb-0" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.252710 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7c8efda2-acd3-4ecf-9295-0ad8d037ca94-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"7c8efda2-acd3-4ecf-9295-0ad8d037ca94\") " pod="openstack/ovsdbserver-sb-0" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.252729 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/04683d4b-dec7-42f6-9803-b301f1d449c3-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"04683d4b-dec7-42f6-9803-b301f1d449c3\") " pod="openstack/ovsdbserver-nb-0" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.252757 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvtsf\" (UniqueName: 
\"kubernetes.io/projected/90d80d33-b519-4d67-97ba-1b8b828e917b-kube-api-access-xvtsf\") pod \"must-gather-5vk8l\" (UID: \"90d80d33-b519-4d67-97ba-1b8b828e917b\") " pod="openshift-must-gather-cdcjj/must-gather-5vk8l" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.252778 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/04683d4b-dec7-42f6-9803-b301f1d449c3-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"04683d4b-dec7-42f6-9803-b301f1d449c3\") " pod="openstack/ovsdbserver-nb-0" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.252813 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: \"7c8efda2-acd3-4ecf-9295-0ad8d037ca94\") " pod="openstack/ovsdbserver-sb-0" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.252838 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/90d80d33-b519-4d67-97ba-1b8b828e917b-must-gather-output\") pod \"must-gather-5vk8l\" (UID: \"90d80d33-b519-4d67-97ba-1b8b828e917b\") " pod="openshift-must-gather-cdcjj/must-gather-5vk8l" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.252870 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/04683d4b-dec7-42f6-9803-b301f1d449c3-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"04683d4b-dec7-42f6-9803-b301f1d449c3\") " pod="openstack/ovsdbserver-nb-0" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.252888 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c8efda2-acd3-4ecf-9295-0ad8d037ca94-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7c8efda2-acd3-4ecf-9295-0ad8d037ca94\") " pod="openstack/ovsdbserver-sb-0" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.252911 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04683d4b-dec7-42f6-9803-b301f1d449c3-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"04683d4b-dec7-42f6-9803-b301f1d449c3\") " pod="openstack/ovsdbserver-nb-0" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.253513 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.254269 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/04683d4b-dec7-42f6-9803-b301f1d449c3-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"04683d4b-dec7-42f6-9803-b301f1d449c3\") " pod="openstack/ovsdbserver-nb-0" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.255140 4813 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: \"7c8efda2-acd3-4ecf-9295-0ad8d037ca94\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/ovsdbserver-sb-0" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.255192 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: 
\"kubernetes.io/empty-dir/90d80d33-b519-4d67-97ba-1b8b828e917b-must-gather-output\") pod \"must-gather-5vk8l\" (UID: \"90d80d33-b519-4d67-97ba-1b8b828e917b\") " pod="openshift-must-gather-cdcjj/must-gather-5vk8l" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.255867 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/04683d4b-dec7-42f6-9803-b301f1d449c3-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"04683d4b-dec7-42f6-9803-b301f1d449c3\") " pod="openstack/ovsdbserver-nb-0" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.256326 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04683d4b-dec7-42f6-9803-b301f1d449c3-config\") pod \"ovsdbserver-nb-0\" (UID: \"04683d4b-dec7-42f6-9803-b301f1d449c3\") " pod="openstack/ovsdbserver-nb-0" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.255142 4813 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"04683d4b-dec7-42f6-9803-b301f1d449c3\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/ovsdbserver-nb-0" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.257163 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7c8efda2-acd3-4ecf-9295-0ad8d037ca94-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"7c8efda2-acd3-4ecf-9295-0ad8d037ca94\") " pod="openstack/ovsdbserver-sb-0" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.257848 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c8efda2-acd3-4ecf-9295-0ad8d037ca94-config\") pod \"ovsdbserver-sb-0\" (UID: \"7c8efda2-acd3-4ecf-9295-0ad8d037ca94\") " pod="openstack/ovsdbserver-sb-0" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.259552 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7c8efda2-acd3-4ecf-9295-0ad8d037ca94-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"7c8efda2-acd3-4ecf-9295-0ad8d037ca94\") " pod="openstack/ovsdbserver-sb-0" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.259867 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c8efda2-acd3-4ecf-9295-0ad8d037ca94-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7c8efda2-acd3-4ecf-9295-0ad8d037ca94\") " pod="openstack/ovsdbserver-sb-0" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.260193 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/04683d4b-dec7-42f6-9803-b301f1d449c3-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"04683d4b-dec7-42f6-9803-b301f1d449c3\") " pod="openstack/ovsdbserver-nb-0" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.260477 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c8efda2-acd3-4ecf-9295-0ad8d037ca94-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"7c8efda2-acd3-4ecf-9295-0ad8d037ca94\") " pod="openstack/ovsdbserver-sb-0" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.265708 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c8efda2-acd3-4ecf-9295-0ad8d037ca94-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7c8efda2-acd3-4ecf-9295-0ad8d037ca94\") " pod="openstack/ovsdbserver-sb-0" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.266948 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.269870 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/04683d4b-dec7-42f6-9803-b301f1d449c3-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"04683d4b-dec7-42f6-9803-b301f1d449c3\") " pod="openstack/ovsdbserver-nb-0" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.272174 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04683d4b-dec7-42f6-9803-b301f1d449c3-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"04683d4b-dec7-42f6-9803-b301f1d449c3\") " pod="openstack/ovsdbserver-nb-0" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.275442 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvtsf\" (UniqueName: \"kubernetes.io/projected/90d80d33-b519-4d67-97ba-1b8b828e917b-kube-api-access-xvtsf\") pod \"must-gather-5vk8l\" (UID: \"90d80d33-b519-4d67-97ba-1b8b828e917b\") " pod="openshift-must-gather-cdcjj/must-gather-5vk8l" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.279381 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwxrt\" (UniqueName: \"kubernetes.io/projected/04683d4b-dec7-42f6-9803-b301f1d449c3-kube-api-access-nwxrt\") pod \"ovsdbserver-nb-0\" (UID: \"04683d4b-dec7-42f6-9803-b301f1d449c3\") " pod="openstack/ovsdbserver-nb-0" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.287261 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: \"7c8efda2-acd3-4ecf-9295-0ad8d037ca94\") " pod="openstack/ovsdbserver-sb-0" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.296756 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pqhb\" (UniqueName: \"kubernetes.io/projected/7c8efda2-acd3-4ecf-9295-0ad8d037ca94-kube-api-access-5pqhb\") pod \"ovsdbserver-sb-0\" (UID: \"7c8efda2-acd3-4ecf-9295-0ad8d037ca94\") " pod="openstack/ovsdbserver-sb-0" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.296791 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"04683d4b-dec7-42f6-9803-b301f1d449c3\") " pod="openstack/ovsdbserver-nb-0" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.311606 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.343733 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-4wff2" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.345475 4813 scope.go:117] "RemoveContainer" containerID="16460e4f9c43088098ac12f9e10def54db37c1068c6a044a870425a3f19e77b4" Nov 25 10:51:14 
crc kubenswrapper[4813]: E1125 10:51:14.346025 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=barbican-operator-controller-manager-86dc4d89c8-4wff2_openstack-operators(03c63a63-9a46-4bda-941b-8c5ba81a13fe)\"" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-4wff2" podUID="03c63a63-9a46-4bda-941b-8c5ba81a13fe" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.360795 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-dvfd9" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.361635 4813 scope.go:117] "RemoveContainer" containerID="6cfbef3e5911a335e778a25cc22825312e21d3376c549b161d9302f36e73d1b9" Nov 25 10:51:14 crc kubenswrapper[4813]: E1125 10:51:14.361923 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=cinder-operator-controller-manager-79856dc55c-dvfd9_openstack-operators(a650bdd3-2541-4b76-b5db-64273262bc06)\"" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-dvfd9" podUID="a650bdd3-2541-4b76-b5db-64273262bc06" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.379829 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.380564 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-hjqzd" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.381467 4813 scope.go:117] "RemoveContainer" containerID="6adae85f90a1da16b445e1a30fe09db98185ce36b6a45741031a9f7f69e1e630" Nov 25 10:51:14 crc kubenswrapper[4813]: E1125 10:51:14.381765 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=designate-operator-controller-manager-7d695c9b56-hjqzd_openstack-operators(aa2934d9-d547-49d0-9d06-232120b44fa1)\"" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-hjqzd" podUID="aa2934d9-d547-49d0-9d06-232120b44fa1" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.402388 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.408763 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cdcjj/must-gather-5vk8l" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.415248 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-547cf68667-6v6dd" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.416106 4813 scope.go:117] "RemoveContainer" containerID="aa2d95b74c8b460ce076d792421db9415d752e61eb487f0fbfdbe47d00194d5b" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.416387 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Nov 25 10:51:14 crc kubenswrapper[4813]: E1125 10:51:14.416415 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=glance-operator-controller-manager-547cf68667-6v6dd_openstack-operators(71c5bfc5-a289-4942-bc55-819f06787eb6)\"" pod="openstack-operators/glance-operator-controller-manager-547cf68667-6v6dd" podUID="71c5bfc5-a289-4942-bc55-819f06787eb6" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.457082 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-774b86978c-f6dvp" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.457982 4813 scope.go:117] "RemoveContainer" containerID="214e46a8b9a71b8264e51a0cf7e2d11786fdb4b5d0f1d240813790d9bee31895" Nov 25 10:51:14 crc kubenswrapper[4813]: E1125 10:51:14.458250 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=heat-operator-controller-manager-774b86978c-f6dvp_openstack-operators(eaf6f1c0-6585-4eba-8baf-942ed2503735)\"" pod="openstack-operators/heat-operator-controller-manager-774b86978c-f6dvp" podUID="eaf6f1c0-6585-4eba-8baf-942ed2503735" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.471433 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-8spkk" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.472431 4813 scope.go:117] "RemoveContainer" containerID="86e726cf9b8333f0660a30b9e6f09b1e7a7dd75a7fe3436c10eff9990aebb19c" Nov 25 10:51:14 crc kubenswrapper[4813]: E1125 10:51:14.472720 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=horizon-operator-controller-manager-68c9694994-8spkk_openstack-operators(af18e07e-95b3-476f-9604-824c36ae74a5)\"" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-8spkk" podUID="af18e07e-95b3-476f-9604-824c36ae74a5" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.477990 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.498277 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-hdpgf" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.558579 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/78498723-5c73-4aa4-8480-ef20ce8593ac-dns-svc\") pod \"78498723-5c73-4aa4-8480-ef20ce8593ac\" (UID: \"78498723-5c73-4aa4-8480-ef20ce8593ac\") " Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.558902 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7467\" (UniqueName: \"kubernetes.io/projected/78498723-5c73-4aa4-8480-ef20ce8593ac-kube-api-access-q7467\") pod \"78498723-5c73-4aa4-8480-ef20ce8593ac\" (UID: \"78498723-5c73-4aa4-8480-ef20ce8593ac\") " Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.558992 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78498723-5c73-4aa4-8480-ef20ce8593ac-config\") pod \"78498723-5c73-4aa4-8480-ef20ce8593ac\" (UID: \"78498723-5c73-4aa4-8480-ef20ce8593ac\") " Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.563611 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78498723-5c73-4aa4-8480-ef20ce8593ac-kube-api-access-q7467" (OuterVolumeSpecName: "kube-api-access-q7467") pod "78498723-5c73-4aa4-8480-ef20ce8593ac" (UID: "78498723-5c73-4aa4-8480-ef20ce8593ac"). InnerVolumeSpecName "kube-api-access-q7467". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.573973 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-blrjt" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.574726 4813 scope.go:117] "RemoveContainer" containerID="c79525bf17e1747505b559eb8e125a6012f2aa8ff9aaa37562d972c623d802a0" Nov 25 10:51:14 crc kubenswrapper[4813]: E1125 10:51:14.574985 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=ironic-operator-controller-manager-5bfcdc958c-blrjt_openstack-operators(d4a62556-e6e8-42dc-b7e4-180c40611393)\"" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-blrjt" podUID="d4a62556-e6e8-42dc-b7e4-180c40611393" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.584910 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-858778c9dc-fs9sm" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.585550 4813 scope.go:117] "RemoveContainer" containerID="2151bd31d0069b61def43848e29c57b6d08b542f9888b266dabceb722a50f8fa" Nov 25 10:51:14 crc kubenswrapper[4813]: E1125 10:51:14.585835 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=infra-operator-controller-manager-858778c9dc-fs9sm_openstack-operators(06c81a1e-0461-4457-85ea-1a4060423eda)\"" pod="openstack-operators/infra-operator-controller-manager-858778c9dc-fs9sm" podUID="06c81a1e-0461-4457-85ea-1a4060423eda" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.603769 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78498723-5c73-4aa4-8480-ef20ce8593ac-dns-svc" (OuterVolumeSpecName: 
"dns-svc") pod "78498723-5c73-4aa4-8480-ef20ce8593ac" (UID: "78498723-5c73-4aa4-8480-ef20ce8593ac"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.613763 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78498723-5c73-4aa4-8480-ef20ce8593ac-config" (OuterVolumeSpecName: "config") pod "78498723-5c73-4aa4-8480-ef20ce8593ac" (UID: "78498723-5c73-4aa4-8480-ef20ce8593ac"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.662727 4813 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/78498723-5c73-4aa4-8480-ef20ce8593ac-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.662768 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7467\" (UniqueName: \"kubernetes.io/projected/78498723-5c73-4aa4-8480-ef20ce8593ac-kube-api-access-q7467\") on node \"crc\" DevicePath \"\"" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.662780 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78498723-5c73-4aa4-8480-ef20ce8593ac-config\") on node \"crc\" DevicePath \"\"" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.707291 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.712267 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.747987 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-jcjzx" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.749126 4813 scope.go:117] "RemoveContainer" containerID="e433512445f8930eb23fc7cbeaee87d955772ae05eaa6befc87f3a1cc1f105cf" Nov 25 10:51:14 crc kubenswrapper[4813]: E1125 10:51:14.749618 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=manila-operator-controller-manager-58bb8d67cc-jcjzx_openstack-operators(efca9205-8a59-45ce-8c50-36b0d0389f12)\"" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-jcjzx" podUID="efca9205-8a59-45ce-8c50-36b0d0389f12" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.763882 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-76j46" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.764582 4813 scope.go:117] "RemoveContainer" containerID="e4042b093b8fb2490684ba66d53230d906e4682f9e60b297770ef5c653c68a70" Nov 25 10:51:14 crc kubenswrapper[4813]: E1125 10:51:14.764956 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=keystone-operator-controller-manager-748dc6576f-76j46_openstack-operators(7921584b-8ce0-45b8-8a56-ab0fdde43582)\"" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-76j46" podUID="7921584b-8ce0-45b8-8a56-ab0fdde43582" Nov 25 10:51:14 crc 
kubenswrapper[4813]: I1125 10:51:14.774841 4813 generic.go:334] "Generic (PLEG): container finished" podID="78498723-5c73-4aa4-8480-ef20ce8593ac" containerID="f1d21f9b405c6f0be60360ae0f540166ed247b53b4cbefa02f13e3b211905ab8" exitCode=0 Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.774886 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-hdpgf" event={"ID":"78498723-5c73-4aa4-8480-ef20ce8593ac","Type":"ContainerDied","Data":"f1d21f9b405c6f0be60360ae0f540166ed247b53b4cbefa02f13e3b211905ab8"} Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.775240 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-hdpgf" event={"ID":"78498723-5c73-4aa4-8480-ef20ce8593ac","Type":"ContainerDied","Data":"e22f617a5ceb793a81c6db1b441b011de6614d327888ab2d8c50d73caa8f76e7"} Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.775281 4813 scope.go:117] "RemoveContainer" containerID="f1d21f9b405c6f0be60360ae0f540166ed247b53b4cbefa02f13e3b211905ab8" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.774998 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-hdpgf" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.791702 4813 scope.go:117] "RemoveContainer" containerID="23a89a70d897d66967d8b27980922bd55fae76bf3c5409e71c22919ef9f83c19" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.796829 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.808883 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-5ldjd" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.809566 4813 scope.go:117] "RemoveContainer" containerID="0627009aad30b0ce2e452421ea5038adf0d553e83c703897ef42fc34d1270eb5" Nov 25 10:51:14 crc kubenswrapper[4813]: E1125 10:51:14.809789 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=mariadb-operator-controller-manager-cb6c4fdb7-5ldjd_openstack-operators(baf6f7bb-db50-4013-8b77-2b7e4c8101c2)\"" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-5ldjd" podUID="baf6f7bb-db50-4013-8b77-2b7e4c8101c2" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.823866 4813 scope.go:117] "RemoveContainer" containerID="f1d21f9b405c6f0be60360ae0f540166ed247b53b4cbefa02f13e3b211905ab8" Nov 25 10:51:14 crc kubenswrapper[4813]: E1125 10:51:14.824457 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1d21f9b405c6f0be60360ae0f540166ed247b53b4cbefa02f13e3b211905ab8\": container with ID starting with f1d21f9b405c6f0be60360ae0f540166ed247b53b4cbefa02f13e3b211905ab8 not found: ID does not exist" containerID="f1d21f9b405c6f0be60360ae0f540166ed247b53b4cbefa02f13e3b211905ab8" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.824618 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1d21f9b405c6f0be60360ae0f540166ed247b53b4cbefa02f13e3b211905ab8"} err="failed to get container status \"f1d21f9b405c6f0be60360ae0f540166ed247b53b4cbefa02f13e3b211905ab8\": rpc error: code = NotFound desc = could not find container 
\"f1d21f9b405c6f0be60360ae0f540166ed247b53b4cbefa02f13e3b211905ab8\": container with ID starting with f1d21f9b405c6f0be60360ae0f540166ed247b53b4cbefa02f13e3b211905ab8 not found: ID does not exist" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.824798 4813 scope.go:117] "RemoveContainer" containerID="23a89a70d897d66967d8b27980922bd55fae76bf3c5409e71c22919ef9f83c19" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.825932 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-c6kw6" Nov 25 10:51:14 crc kubenswrapper[4813]: E1125 10:51:14.833539 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23a89a70d897d66967d8b27980922bd55fae76bf3c5409e71c22919ef9f83c19\": container with ID starting with 23a89a70d897d66967d8b27980922bd55fae76bf3c5409e71c22919ef9f83c19 not found: ID does not exist" containerID="23a89a70d897d66967d8b27980922bd55fae76bf3c5409e71c22919ef9f83c19" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.833589 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23a89a70d897d66967d8b27980922bd55fae76bf3c5409e71c22919ef9f83c19"} err="failed to get container status \"23a89a70d897d66967d8b27980922bd55fae76bf3c5409e71c22919ef9f83c19\": rpc error: code = NotFound desc = could not find container \"23a89a70d897d66967d8b27980922bd55fae76bf3c5409e71c22919ef9f83c19\": container with ID starting with 23a89a70d897d66967d8b27980922bd55fae76bf3c5409e71c22919ef9f83c19 not found: ID does not exist" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.833711 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-6j272" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.833851 4813 scope.go:117] "RemoveContainer" containerID="43d5691a3552e7c2d7e6aa05dd094621e377ecc88488a8e0c5598d77d496a181" Nov 25 10:51:14 crc kubenswrapper[4813]: E1125 10:51:14.834476 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=neutron-operator-controller-manager-7c57c8bbc4-c6kw6_openstack-operators(b69526d6-6616-4536-a228-4cdb57e1881c)\"" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-c6kw6" podUID="b69526d6-6616-4536-a228-4cdb57e1881c" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.835218 4813 scope.go:117] "RemoveContainer" containerID="992531cc19bfe1ced64390d5b58ded8d348ef8aad2de68f0eb7b8d5f8b4ff0d3" Nov 25 10:51:14 crc kubenswrapper[4813]: E1125 10:51:14.835547 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=nova-operator-controller-manager-79556f57fc-6j272_openstack-operators(9374bbb0-b458-4c1c-a327-67bcbea83045)\"" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-6j272" podUID="9374bbb0-b458-4c1c-a327-67bcbea83045" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.849088 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-hdpgf"] Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.859477 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-hdpgf"] Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 
10:51:14.878090 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-gjs27" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.880983 4813 scope.go:117] "RemoveContainer" containerID="2a277922b1d2931bedbc476b84bfcd968bab53c7c778a49f36382c68a2a67ab7" Nov 25 10:51:14 crc kubenswrapper[4813]: E1125 10:51:14.881567 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=octavia-operator-controller-manager-fd75fd47d-gjs27_openstack-operators(a31ffbb8-0255-45d6-9125-6cccc7b444ba)\"" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-gjs27" podUID="a31ffbb8-0255-45d6-9125-6cccc7b444ba" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.991081 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-2d2x7" Nov 25 10:51:14 crc kubenswrapper[4813]: I1125 10:51:14.992239 4813 scope.go:117] "RemoveContainer" containerID="852ade3a02bdc4966cc001cd60b4f66a047664199c14930683be6960cadaac48" Nov 25 10:51:14 crc kubenswrapper[4813]: E1125 10:51:14.992477 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=placement-operator-controller-manager-5db546f9d9-2d2x7_openstack-operators(9093a664-86f3-4349-bd13-0a5e4aca8036)\"" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-2d2x7" podUID="9093a664-86f3-4349-bd13-0a5e4aca8036" Nov 25 10:51:15 crc kubenswrapper[4813]: I1125 10:51:15.029801 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-fjkzd" Nov 25 10:51:15 crc kubenswrapper[4813]: I1125 10:51:15.030523 4813 scope.go:117] "RemoveContainer" containerID="9a9014db12945f5c91d4957251d5c07fad072365298baa2de399c2d1672f60e6" Nov 25 10:51:15 crc kubenswrapper[4813]: E1125 10:51:15.030800 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=swift-operator-controller-manager-6fdc4fcf86-fjkzd_openstack-operators(94c3d2b4-f1bb-402d-a39d-78e16bee970b)\"" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-fjkzd" podUID="94c3d2b4-f1bb-402d-a39d-78e16bee970b" Nov 25 10:51:15 crc kubenswrapper[4813]: I1125 10:51:15.060055 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-qplf9" Nov 25 10:51:15 crc kubenswrapper[4813]: I1125 10:51:15.061101 4813 scope.go:117] "RemoveContainer" containerID="8439f0e4f753871ca0b6d1cd7f0234c50f6500f26d7675f32dbb5d90cee04305" Nov 25 10:51:15 crc kubenswrapper[4813]: E1125 10:51:15.061361 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=telemetry-operator-controller-manager-567f98c9d-qplf9_openstack-operators(5f9254c7-c8dc-4504-bdf5-264c78e03b0c)\"" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-qplf9" podUID="5f9254c7-c8dc-4504-bdf5-264c78e03b0c" Nov 25 10:51:15 crc kubenswrapper[4813]: I1125 10:51:15.176219 4813 reflector.go:368] Caches 
populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Nov 25 10:51:15 crc kubenswrapper[4813]: I1125 10:51:15.179080 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Nov 25 10:51:15 crc kubenswrapper[4813]: I1125 10:51:15.181319 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-tc2mg" Nov 25 10:51:15 crc kubenswrapper[4813]: I1125 10:51:15.182466 4813 scope.go:117] "RemoveContainer" containerID="66ec40f15a48177338b909733dfca944bf8b176edec10b3c22c1f9a4cccae5b5" Nov 25 10:51:15 crc kubenswrapper[4813]: E1125 10:51:15.182899 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=ovn-operator-controller-manager-66cf5c67ff-tc2mg_openstack-operators(db556642-a360-4559-8cde-7c25d7a893e0)\"" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-tc2mg" podUID="db556642-a360-4559-8cde-7c25d7a893e0" Nov 25 10:51:15 crc kubenswrapper[4813]: I1125 10:51:15.226853 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-864885998-bpbjt" Nov 25 10:51:15 crc kubenswrapper[4813]: I1125 10:51:15.227748 4813 scope.go:117] "RemoveContainer" containerID="129fb58dceeec99f79108d79e4141e877c5ccddbc95d57d99165becf55b1745d" Nov 25 10:51:15 crc kubenswrapper[4813]: E1125 10:51:15.228087 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=watcher-operator-controller-manager-864885998-bpbjt_openstack-operators(48ea1018-a88f-4ef0-a82f-7e3b012522ec)\"" pod="openstack-operators/watcher-operator-controller-manager-864885998-bpbjt" podUID="48ea1018-a88f-4ef0-a82f-7e3b012522ec" Nov 25 10:51:15 crc kubenswrapper[4813]: I1125 10:51:15.251427 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Nov 25 10:51:15 crc kubenswrapper[4813]: I1125 10:51:15.397936 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Nov 25 10:51:15 crc kubenswrapper[4813]: I1125 10:51:15.408357 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Nov 25 10:51:15 crc kubenswrapper[4813]: I1125 10:51:15.509773 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Nov 25 10:51:15 crc kubenswrapper[4813]: I1125 10:51:15.579337 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Nov 25 10:51:15 crc kubenswrapper[4813]: I1125 10:51:15.615820 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 25 10:51:15 crc kubenswrapper[4813]: I1125 10:51:15.632220 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62da3927-ddca-4922-8e9b-c96d06c44c31" path="/var/lib/kubelet/pods/62da3927-ddca-4922-8e9b-c96d06c44c31/volumes" Nov 25 10:51:15 crc kubenswrapper[4813]: I1125 10:51:15.632709 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69f8a703-848a-4de9-a102-81426dcd6c3a" path="/var/lib/kubelet/pods/69f8a703-848a-4de9-a102-81426dcd6c3a/volumes" Nov 25 
10:51:15 crc kubenswrapper[4813]: I1125 10:51:15.633167 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78498723-5c73-4aa4-8480-ef20ce8593ac" path="/var/lib/kubelet/pods/78498723-5c73-4aa4-8480-ef20ce8593ac/volumes" Nov 25 10:51:15 crc kubenswrapper[4813]: I1125 10:51:15.635331 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 25 10:51:15 crc kubenswrapper[4813]: I1125 10:51:15.786773 4813 generic.go:334] "Generic (PLEG): container finished" podID="aea2efa1-cb45-4657-8ea6-efd7799cb0a4" containerID="0701e0271db215ee376c42bd707d0f093ab2b47d3bb0728f5455ec216732a288" exitCode=0 Nov 25 10:51:15 crc kubenswrapper[4813]: I1125 10:51:15.786835 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"aea2efa1-cb45-4657-8ea6-efd7799cb0a4","Type":"ContainerDied","Data":"0701e0271db215ee376c42bd707d0f093ab2b47d3bb0728f5455ec216732a288"} Nov 25 10:51:15 crc kubenswrapper[4813]: I1125 10:51:15.794884 4813 generic.go:334] "Generic (PLEG): container finished" podID="bf91d2ed-6d43-49b1-8010-1f59f38aea76" containerID="f61649b3a3183e5ea485b01b8d52ba5d8649465776528d41e0e5c9bd61db0694" exitCode=0 Nov 25 10:51:15 crc kubenswrapper[4813]: I1125 10:51:15.794986 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"bf91d2ed-6d43-49b1-8010-1f59f38aea76","Type":"ContainerDied","Data":"f61649b3a3183e5ea485b01b8d52ba5d8649465776528d41e0e5c9bd61db0694"} Nov 25 10:51:15 crc kubenswrapper[4813]: I1125 10:51:15.799880 4813 generic.go:334] "Generic (PLEG): container finished" podID="e9030c35-b810-4f59-b1e6-5daec39fcc6d" containerID="91295643a276fd8e2a13cfaa5b1900a0ab4fc266378e1218439d9294577c93cd" exitCode=1 Nov 25 10:51:15 crc kubenswrapper[4813]: I1125 10:51:15.801015 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"e9030c35-b810-4f59-b1e6-5daec39fcc6d","Type":"ContainerDied","Data":"91295643a276fd8e2a13cfaa5b1900a0ab4fc266378e1218439d9294577c93cd"} Nov 25 10:51:15 crc kubenswrapper[4813]: I1125 10:51:15.801066 4813 scope.go:117] "RemoveContainer" containerID="95750d6a122b9d2009ab5f55a56f9dd060d644f88e52fa5d51be75d058873105" Nov 25 10:51:15 crc kubenswrapper[4813]: I1125 10:51:15.801569 4813 scope.go:117] "RemoveContainer" containerID="91295643a276fd8e2a13cfaa5b1900a0ab4fc266378e1218439d9294577c93cd" Nov 25 10:51:15 crc kubenswrapper[4813]: E1125 10:51:15.801907 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-state-metrics pod=kube-state-metrics-0_openstack(e9030c35-b810-4f59-b1e6-5daec39fcc6d)\"" pod="openstack/kube-state-metrics-0" podUID="e9030c35-b810-4f59-b1e6-5daec39fcc6d" Nov 25 10:51:16 crc kubenswrapper[4813]: I1125 10:51:16.115663 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Nov 25 10:51:16 crc kubenswrapper[4813]: I1125 10:51:16.465699 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Nov 25 10:51:16 crc kubenswrapper[4813]: I1125 10:51:16.809132 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"aea2efa1-cb45-4657-8ea6-efd7799cb0a4","Type":"ContainerStarted","Data":"913f926750f49dfe77513dbf4232783df214e00f76210764b2934cb0fad38b6b"} 
Nov 25 10:51:16 crc kubenswrapper[4813]: I1125 10:51:16.809661 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 25 10:51:16 crc kubenswrapper[4813]: I1125 10:51:16.811752 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"bf91d2ed-6d43-49b1-8010-1f59f38aea76","Type":"ContainerStarted","Data":"6da1ff7d6fcae58f674efd8d3293596350556c5b00a3e9b7de75cac5015c696e"} Nov 25 10:51:16 crc kubenswrapper[4813]: I1125 10:51:16.811957 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 25 10:51:16 crc kubenswrapper[4813]: I1125 10:51:16.815402 4813 generic.go:334] "Generic (PLEG): container finished" podID="0444b7b3-af36-4fca-80c6-8348adc42a58" containerID="d485ef91cc0c7661e8de48a1695d2002b56d90f65fe6b821940417d9704ee765" exitCode=0 Nov 25 10:51:16 crc kubenswrapper[4813]: I1125 10:51:16.815458 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"0444b7b3-af36-4fca-80c6-8348adc42a58","Type":"ContainerDied","Data":"d485ef91cc0c7661e8de48a1695d2002b56d90f65fe6b821940417d9704ee765"} Nov 25 10:51:16 crc kubenswrapper[4813]: I1125 10:51:16.817008 4813 generic.go:334] "Generic (PLEG): container finished" podID="9005be17-9874-4f4f-bd91-39b3c74314ec" containerID="c665bcb3ca9b8e3b9b3b67396c7636f5856751171315ecdc828b020fe41d11f7" exitCode=0 Nov 25 10:51:16 crc kubenswrapper[4813]: I1125 10:51:16.817039 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9005be17-9874-4f4f-bd91-39b3c74314ec","Type":"ContainerDied","Data":"c665bcb3ca9b8e3b9b3b67396c7636f5856751171315ecdc828b020fe41d11f7"} Nov 25 10:51:16 crc kubenswrapper[4813]: I1125 10:51:16.818728 4813 generic.go:334] "Generic (PLEG): container finished" podID="396645a8-bd9a-429a-8d95-33dcec24c4ba" containerID="822dffefb9c96fe8bc81964af2660bbeeaa2e42c111e9ef90f07aa0ab79f0822" exitCode=1 Nov 25 10:51:16 crc kubenswrapper[4813]: I1125 10:51:16.818836 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-9bjpb" event={"ID":"396645a8-bd9a-429a-8d95-33dcec24c4ba","Type":"ContainerDied","Data":"822dffefb9c96fe8bc81964af2660bbeeaa2e42c111e9ef90f07aa0ab79f0822"} Nov 25 10:51:16 crc kubenswrapper[4813]: I1125 10:51:16.819660 4813 scope.go:117] "RemoveContainer" containerID="822dffefb9c96fe8bc81964af2660bbeeaa2e42c111e9ef90f07aa0ab79f0822" Nov 25 10:51:16 crc kubenswrapper[4813]: I1125 10:51:16.843115 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.532998383 podStartE2EDuration="1m4.843093296s" podCreationTimestamp="2025-11-25 10:50:12 +0000 UTC" firstStartedPulling="2025-11-25 10:50:14.749406138 +0000 UTC m=+1111.879116024" lastFinishedPulling="2025-11-25 10:50:41.059501051 +0000 UTC m=+1138.189210937" observedRunningTime="2025-11-25 10:51:16.83410587 +0000 UTC m=+1173.963815766" watchObservedRunningTime="2025-11-25 10:51:16.843093296 +0000 UTC m=+1173.972803182" Nov 25 10:51:16 crc kubenswrapper[4813]: I1125 10:51:16.876500 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=42.55556747 podStartE2EDuration="1m3.876478594s" podCreationTimestamp="2025-11-25 10:50:13 +0000 UTC" firstStartedPulling="2025-11-25 10:50:20.024411607 +0000 UTC m=+1117.154121493" 
lastFinishedPulling="2025-11-25 10:50:41.345322721 +0000 UTC m=+1138.475032617" observedRunningTime="2025-11-25 10:51:16.872321246 +0000 UTC m=+1174.002031152" watchObservedRunningTime="2025-11-25 10:51:16.876478594 +0000 UTC m=+1174.006188480" Nov 25 10:51:16 crc kubenswrapper[4813]: I1125 10:51:16.881256 4813 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Nov 25 10:51:17 crc kubenswrapper[4813]: I1125 10:51:17.066077 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Nov 25 10:51:17 crc kubenswrapper[4813]: I1125 10:51:17.621149 4813 scope.go:117] "RemoveContainer" containerID="5e6d99aceef67e79d2faf4c6ce97387949ec0a4024f91f0fec708b9d90c04746" Nov 25 10:51:17 crc kubenswrapper[4813]: I1125 10:51:17.831458 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"0444b7b3-af36-4fca-80c6-8348adc42a58","Type":"ContainerStarted","Data":"b93750874c71ca3d1d7d50f1fb30894eba5c684143e2d04b019e83cdce65e424"} Nov 25 10:51:17 crc kubenswrapper[4813]: I1125 10:51:17.833879 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9005be17-9874-4f4f-bd91-39b3c74314ec","Type":"ContainerStarted","Data":"b16a26f36fccfcb8b61af7eaa248906d6db217d5a30d65751b58e03481a6507e"} Nov 25 10:51:17 crc kubenswrapper[4813]: I1125 10:51:17.836597 4813 generic.go:334] "Generic (PLEG): container finished" podID="396645a8-bd9a-429a-8d95-33dcec24c4ba" containerID="76fa17964a3871eb604f477b537ec7e69842e8b3ea7bceb7b3518f1c9a9b6c20" exitCode=1 Nov 25 10:51:17 crc kubenswrapper[4813]: I1125 10:51:17.836646 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-9bjpb" event={"ID":"396645a8-bd9a-429a-8d95-33dcec24c4ba","Type":"ContainerDied","Data":"76fa17964a3871eb604f477b537ec7e69842e8b3ea7bceb7b3518f1c9a9b6c20"} Nov 25 10:51:17 crc kubenswrapper[4813]: I1125 10:51:17.836742 4813 scope.go:117] "RemoveContainer" containerID="822dffefb9c96fe8bc81964af2660bbeeaa2e42c111e9ef90f07aa0ab79f0822" Nov 25 10:51:17 crc kubenswrapper[4813]: I1125 10:51:17.837467 4813 scope.go:117] "RemoveContainer" containerID="76fa17964a3871eb604f477b537ec7e69842e8b3ea7bceb7b3518f1c9a9b6c20" Nov 25 10:51:17 crc kubenswrapper[4813]: E1125 10:51:17.838040 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cert-manager-cainjector\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cert-manager-cainjector pod=cert-manager-cainjector-7f985d654d-9bjpb_cert-manager(396645a8-bd9a-429a-8d95-33dcec24c4ba)\"" pod="cert-manager/cert-manager-cainjector-7f985d654d-9bjpb" podUID="396645a8-bd9a-429a-8d95-33dcec24c4ba" Nov 25 10:51:17 crc kubenswrapper[4813]: I1125 10:51:17.840654 4813 generic.go:334] "Generic (PLEG): container finished" podID="ee2b9b30-2c9f-4c88-b31b-a20957e03939" containerID="16e43b42c5f957dba2601e0a03858cb3669954b1ce432af6dcbef18f6448b299" exitCode=1 Nov 25 10:51:17 crc kubenswrapper[4813]: I1125 10:51:17.840726 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-ds4rg" event={"ID":"ee2b9b30-2c9f-4c88-b31b-a20957e03939","Type":"ContainerDied","Data":"16e43b42c5f957dba2601e0a03858cb3669954b1ce432af6dcbef18f6448b299"} Nov 25 10:51:17 crc kubenswrapper[4813]: I1125 10:51:17.841607 4813 scope.go:117] "RemoveContainer" containerID="16e43b42c5f957dba2601e0a03858cb3669954b1ce432af6dcbef18f6448b299" 
Nov 25 10:51:17 crc kubenswrapper[4813]: I1125 10:51:17.848341 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6b84b955f5-mmrm7" event={"ID":"a6eb0ffd-2e55-4d5a-9ac7-19b25ba6ec8b","Type":"ContainerStarted","Data":"b1071257cf141e4ed949afbe28f925bacd737907f1dc1d027f282faf5869e5aa"} Nov 25 10:51:17 crc kubenswrapper[4813]: I1125 10:51:17.849107 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-6b84b955f5-mmrm7" Nov 25 10:51:17 crc kubenswrapper[4813]: I1125 10:51:17.890877 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=61.890845072 podStartE2EDuration="1m1.890845072s" podCreationTimestamp="2025-11-25 10:50:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 10:51:17.869497925 +0000 UTC m=+1174.999207811" watchObservedRunningTime="2025-11-25 10:51:17.890845072 +0000 UTC m=+1175.020554958" Nov 25 10:51:17 crc kubenswrapper[4813]: I1125 10:51:17.907802 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=57.474624814 podStartE2EDuration="1m3.907781523s" podCreationTimestamp="2025-11-25 10:50:14 +0000 UTC" firstStartedPulling="2025-11-25 10:50:56.111358149 +0000 UTC m=+1153.241068045" lastFinishedPulling="2025-11-25 10:51:02.544514868 +0000 UTC m=+1159.674224754" observedRunningTime="2025-11-25 10:51:17.906791985 +0000 UTC m=+1175.036501891" watchObservedRunningTime="2025-11-25 10:51:17.907781523 +0000 UTC m=+1175.037491409" Nov 25 10:51:18 crc kubenswrapper[4813]: I1125 10:51:18.621813 4813 scope.go:117] "RemoveContainer" containerID="e28573f59b3128bbce8c504da372cc321d5d011981900766f25a8cfb8347def3" Nov 25 10:51:18 crc kubenswrapper[4813]: E1125 10:51:18.622225 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=operator pod=rabbitmq-cluster-operator-manager-668c99d594-qd4tx_openstack-operators(2bf03402-32ec-423d-a6af-657bc0cfeb15)\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qd4tx" podUID="2bf03402-32ec-423d-a6af-657bc0cfeb15" Nov 25 10:51:18 crc kubenswrapper[4813]: I1125 10:51:18.860890 4813 generic.go:334] "Generic (PLEG): container finished" podID="a6eb0ffd-2e55-4d5a-9ac7-19b25ba6ec8b" containerID="b1071257cf141e4ed949afbe28f925bacd737907f1dc1d027f282faf5869e5aa" exitCode=1 Nov 25 10:51:18 crc kubenswrapper[4813]: I1125 10:51:18.860988 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6b84b955f5-mmrm7" event={"ID":"a6eb0ffd-2e55-4d5a-9ac7-19b25ba6ec8b","Type":"ContainerDied","Data":"b1071257cf141e4ed949afbe28f925bacd737907f1dc1d027f282faf5869e5aa"} Nov 25 10:51:18 crc kubenswrapper[4813]: I1125 10:51:18.861036 4813 scope.go:117] "RemoveContainer" containerID="5e6d99aceef67e79d2faf4c6ce97387949ec0a4024f91f0fec708b9d90c04746" Nov 25 10:51:18 crc kubenswrapper[4813]: I1125 10:51:18.861610 4813 scope.go:117] "RemoveContainer" containerID="b1071257cf141e4ed949afbe28f925bacd737907f1dc1d027f282faf5869e5aa" Nov 25 10:51:18 crc kubenswrapper[4813]: E1125 10:51:18.861897 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: 
\"back-off 40s restarting failed container=manager pod=metallb-operator-controller-manager-6b84b955f5-mmrm7_metallb-system(a6eb0ffd-2e55-4d5a-9ac7-19b25ba6ec8b)\"" pod="metallb-system/metallb-operator-controller-manager-6b84b955f5-mmrm7" podUID="a6eb0ffd-2e55-4d5a-9ac7-19b25ba6ec8b" Nov 25 10:51:18 crc kubenswrapper[4813]: I1125 10:51:18.867806 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-ds4rg" event={"ID":"ee2b9b30-2c9f-4c88-b31b-a20957e03939","Type":"ContainerStarted","Data":"af0eff9dafbe89b3222e1be5cdfcdf1683ed7849dfc97c5af4bfb691508ff53a"} Nov 25 10:51:19 crc kubenswrapper[4813]: I1125 10:51:19.264530 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 25 10:51:19 crc kubenswrapper[4813]: I1125 10:51:19.264902 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/kube-state-metrics-0" Nov 25 10:51:19 crc kubenswrapper[4813]: I1125 10:51:19.265625 4813 scope.go:117] "RemoveContainer" containerID="91295643a276fd8e2a13cfaa5b1900a0ab4fc266378e1218439d9294577c93cd" Nov 25 10:51:19 crc kubenswrapper[4813]: E1125 10:51:19.265941 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-state-metrics pod=kube-state-metrics-0_openstack(e9030c35-b810-4f59-b1e6-5daec39fcc6d)\"" pod="openstack/kube-state-metrics-0" podUID="e9030c35-b810-4f59-b1e6-5daec39fcc6d" Nov 25 10:51:19 crc kubenswrapper[4813]: I1125 10:51:19.878750 4813 scope.go:117] "RemoveContainer" containerID="b1071257cf141e4ed949afbe28f925bacd737907f1dc1d027f282faf5869e5aa" Nov 25 10:51:19 crc kubenswrapper[4813]: E1125 10:51:19.878963 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=metallb-operator-controller-manager-6b84b955f5-mmrm7_metallb-system(a6eb0ffd-2e55-4d5a-9ac7-19b25ba6ec8b)\"" pod="metallb-system/metallb-operator-controller-manager-6b84b955f5-mmrm7" podUID="a6eb0ffd-2e55-4d5a-9ac7-19b25ba6ec8b" Nov 25 10:51:22 crc kubenswrapper[4813]: I1125 10:51:22.235904 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 25 10:51:23 crc kubenswrapper[4813]: I1125 10:51:23.164051 4813 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 25 10:51:23 crc kubenswrapper[4813]: I1125 10:51:23.164311 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://776cb73dc4fa34e0ed6b14e607e94731f8cf2badcb1b600b291cab373353d8a8" gracePeriod=5 Nov 25 10:51:23 crc kubenswrapper[4813]: I1125 10:51:23.521051 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Nov 25 10:51:23 crc kubenswrapper[4813]: I1125 10:51:23.628077 4813 scope.go:117] "RemoveContainer" containerID="d1e62b445459b34984999bd018b9ecc5cad36cfc97c7cb8b1e67620067d14695" Nov 25 10:51:23 crc kubenswrapper[4813]: E1125 10:51:23.628656 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed 
container=manager pod=openstack-operator-controller-manager-5ffc8f797b-hbwwd_openstack-operators(09bd1800-0aaa-4908-ac58-e0890a2a309f)\"" pod="openstack-operators/openstack-operator-controller-manager-5ffc8f797b-hbwwd" podUID="09bd1800-0aaa-4908-ac58-e0890a2a309f" Nov 25 10:51:24 crc kubenswrapper[4813]: I1125 10:51:24.160716 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Nov 25 10:51:24 crc kubenswrapper[4813]: I1125 10:51:24.343798 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-4wff2" Nov 25 10:51:24 crc kubenswrapper[4813]: I1125 10:51:24.344711 4813 scope.go:117] "RemoveContainer" containerID="16460e4f9c43088098ac12f9e10def54db37c1068c6a044a870425a3f19e77b4" Nov 25 10:51:24 crc kubenswrapper[4813]: E1125 10:51:24.344984 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=barbican-operator-controller-manager-86dc4d89c8-4wff2_openstack-operators(03c63a63-9a46-4bda-941b-8c5ba81a13fe)\"" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-4wff2" podUID="03c63a63-9a46-4bda-941b-8c5ba81a13fe" Nov 25 10:51:24 crc kubenswrapper[4813]: I1125 10:51:24.361615 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-dvfd9" Nov 25 10:51:24 crc kubenswrapper[4813]: I1125 10:51:24.362360 4813 scope.go:117] "RemoveContainer" containerID="6cfbef3e5911a335e778a25cc22825312e21d3376c549b161d9302f36e73d1b9" Nov 25 10:51:24 crc kubenswrapper[4813]: E1125 10:51:24.362586 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=cinder-operator-controller-manager-79856dc55c-dvfd9_openstack-operators(a650bdd3-2541-4b76-b5db-64273262bc06)\"" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-dvfd9" podUID="a650bdd3-2541-4b76-b5db-64273262bc06" Nov 25 10:51:24 crc kubenswrapper[4813]: I1125 10:51:24.380953 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-hjqzd" Nov 25 10:51:24 crc kubenswrapper[4813]: I1125 10:51:24.381766 4813 scope.go:117] "RemoveContainer" containerID="6adae85f90a1da16b445e1a30fe09db98185ce36b6a45741031a9f7f69e1e630" Nov 25 10:51:24 crc kubenswrapper[4813]: E1125 10:51:24.382097 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=designate-operator-controller-manager-7d695c9b56-hjqzd_openstack-operators(aa2934d9-d547-49d0-9d06-232120b44fa1)\"" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-hjqzd" podUID="aa2934d9-d547-49d0-9d06-232120b44fa1" Nov 25 10:51:24 crc kubenswrapper[4813]: I1125 10:51:24.415004 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/glance-operator-controller-manager-547cf68667-6v6dd" Nov 25 10:51:24 crc kubenswrapper[4813]: I1125 10:51:24.415602 4813 scope.go:117] "RemoveContainer" containerID="aa2d95b74c8b460ce076d792421db9415d752e61eb487f0fbfdbe47d00194d5b" Nov 25 10:51:24 crc kubenswrapper[4813]: E1125 
10:51:24.415908 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=glance-operator-controller-manager-547cf68667-6v6dd_openstack-operators(71c5bfc5-a289-4942-bc55-819f06787eb6)\"" pod="openstack-operators/glance-operator-controller-manager-547cf68667-6v6dd" podUID="71c5bfc5-a289-4942-bc55-819f06787eb6" Nov 25 10:51:24 crc kubenswrapper[4813]: I1125 10:51:24.456572 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/heat-operator-controller-manager-774b86978c-f6dvp" Nov 25 10:51:24 crc kubenswrapper[4813]: I1125 10:51:24.457427 4813 scope.go:117] "RemoveContainer" containerID="214e46a8b9a71b8264e51a0cf7e2d11786fdb4b5d0f1d240813790d9bee31895" Nov 25 10:51:24 crc kubenswrapper[4813]: E1125 10:51:24.457721 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=heat-operator-controller-manager-774b86978c-f6dvp_openstack-operators(eaf6f1c0-6585-4eba-8baf-942ed2503735)\"" pod="openstack-operators/heat-operator-controller-manager-774b86978c-f6dvp" podUID="eaf6f1c0-6585-4eba-8baf-942ed2503735" Nov 25 10:51:24 crc kubenswrapper[4813]: I1125 10:51:24.471968 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-8spkk" Nov 25 10:51:24 crc kubenswrapper[4813]: I1125 10:51:24.472632 4813 scope.go:117] "RemoveContainer" containerID="86e726cf9b8333f0660a30b9e6f09b1e7a7dd75a7fe3436c10eff9990aebb19c" Nov 25 10:51:24 crc kubenswrapper[4813]: E1125 10:51:24.472899 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=horizon-operator-controller-manager-68c9694994-8spkk_openstack-operators(af18e07e-95b3-476f-9604-824c36ae74a5)\"" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-8spkk" podUID="af18e07e-95b3-476f-9604-824c36ae74a5" Nov 25 10:51:24 crc kubenswrapper[4813]: I1125 10:51:24.573593 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-blrjt" Nov 25 10:51:24 crc kubenswrapper[4813]: I1125 10:51:24.575878 4813 scope.go:117] "RemoveContainer" containerID="c79525bf17e1747505b559eb8e125a6012f2aa8ff9aaa37562d972c623d802a0" Nov 25 10:51:24 crc kubenswrapper[4813]: E1125 10:51:24.577448 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=ironic-operator-controller-manager-5bfcdc958c-blrjt_openstack-operators(d4a62556-e6e8-42dc-b7e4-180c40611393)\"" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-blrjt" podUID="d4a62556-e6e8-42dc-b7e4-180c40611393" Nov 25 10:51:24 crc kubenswrapper[4813]: I1125 10:51:24.585439 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/infra-operator-controller-manager-858778c9dc-fs9sm" Nov 25 10:51:24 crc kubenswrapper[4813]: I1125 10:51:24.586912 4813 scope.go:117] "RemoveContainer" containerID="2151bd31d0069b61def43848e29c57b6d08b542f9888b266dabceb722a50f8fa" Nov 25 10:51:24 crc kubenswrapper[4813]: E1125 10:51:24.587313 4813 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=infra-operator-controller-manager-858778c9dc-fs9sm_openstack-operators(06c81a1e-0461-4457-85ea-1a4060423eda)\"" pod="openstack-operators/infra-operator-controller-manager-858778c9dc-fs9sm" podUID="06c81a1e-0461-4457-85ea-1a4060423eda" Nov 25 10:51:24 crc kubenswrapper[4813]: I1125 10:51:24.748443 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-jcjzx" Nov 25 10:51:24 crc kubenswrapper[4813]: I1125 10:51:24.749925 4813 scope.go:117] "RemoveContainer" containerID="e433512445f8930eb23fc7cbeaee87d955772ae05eaa6befc87f3a1cc1f105cf" Nov 25 10:51:24 crc kubenswrapper[4813]: E1125 10:51:24.750281 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=manila-operator-controller-manager-58bb8d67cc-jcjzx_openstack-operators(efca9205-8a59-45ce-8c50-36b0d0389f12)\"" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-jcjzx" podUID="efca9205-8a59-45ce-8c50-36b0d0389f12" Nov 25 10:51:24 crc kubenswrapper[4813]: I1125 10:51:24.763891 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-76j46" Nov 25 10:51:24 crc kubenswrapper[4813]: I1125 10:51:24.764948 4813 scope.go:117] "RemoveContainer" containerID="e4042b093b8fb2490684ba66d53230d906e4682f9e60b297770ef5c653c68a70" Nov 25 10:51:24 crc kubenswrapper[4813]: E1125 10:51:24.765265 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=keystone-operator-controller-manager-748dc6576f-76j46_openstack-operators(7921584b-8ce0-45b8-8a56-ab0fdde43582)\"" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-76j46" podUID="7921584b-8ce0-45b8-8a56-ab0fdde43582" Nov 25 10:51:24 crc kubenswrapper[4813]: I1125 10:51:24.809175 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-5ldjd" Nov 25 10:51:24 crc kubenswrapper[4813]: I1125 10:51:24.810311 4813 scope.go:117] "RemoveContainer" containerID="0627009aad30b0ce2e452421ea5038adf0d553e83c703897ef42fc34d1270eb5" Nov 25 10:51:24 crc kubenswrapper[4813]: E1125 10:51:24.810602 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=mariadb-operator-controller-manager-cb6c4fdb7-5ldjd_openstack-operators(baf6f7bb-db50-4013-8b77-2b7e4c8101c2)\"" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-5ldjd" podUID="baf6f7bb-db50-4013-8b77-2b7e4c8101c2" Nov 25 10:51:24 crc kubenswrapper[4813]: I1125 10:51:24.826984 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-c6kw6" Nov 25 10:51:24 crc kubenswrapper[4813]: I1125 10:51:24.829580 4813 scope.go:117] "RemoveContainer" containerID="43d5691a3552e7c2d7e6aa05dd094621e377ecc88488a8e0c5598d77d496a181" Nov 25 10:51:24 crc kubenswrapper[4813]: E1125 10:51:24.830018 4813 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=neutron-operator-controller-manager-7c57c8bbc4-c6kw6_openstack-operators(b69526d6-6616-4536-a228-4cdb57e1881c)\"" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-c6kw6" podUID="b69526d6-6616-4536-a228-4cdb57e1881c" Nov 25 10:51:24 crc kubenswrapper[4813]: I1125 10:51:24.833622 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-6j272" Nov 25 10:51:24 crc kubenswrapper[4813]: I1125 10:51:24.834921 4813 scope.go:117] "RemoveContainer" containerID="992531cc19bfe1ced64390d5b58ded8d348ef8aad2de68f0eb7b8d5f8b4ff0d3" Nov 25 10:51:24 crc kubenswrapper[4813]: E1125 10:51:24.835275 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=nova-operator-controller-manager-79556f57fc-6j272_openstack-operators(9374bbb0-b458-4c1c-a327-67bcbea83045)\"" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-6j272" podUID="9374bbb0-b458-4c1c-a327-67bcbea83045" Nov 25 10:51:24 crc kubenswrapper[4813]: I1125 10:51:24.877997 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-gjs27" Nov 25 10:51:24 crc kubenswrapper[4813]: I1125 10:51:24.878924 4813 scope.go:117] "RemoveContainer" containerID="2a277922b1d2931bedbc476b84bfcd968bab53c7c778a49f36382c68a2a67ab7" Nov 25 10:51:24 crc kubenswrapper[4813]: E1125 10:51:24.879263 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=octavia-operator-controller-manager-fd75fd47d-gjs27_openstack-operators(a31ffbb8-0255-45d6-9125-6cccc7b444ba)\"" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-gjs27" podUID="a31ffbb8-0255-45d6-9125-6cccc7b444ba" Nov 25 10:51:24 crc kubenswrapper[4813]: I1125 10:51:24.991129 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-2d2x7" Nov 25 10:51:24 crc kubenswrapper[4813]: I1125 10:51:24.992618 4813 scope.go:117] "RemoveContainer" containerID="852ade3a02bdc4966cc001cd60b4f66a047664199c14930683be6960cadaac48" Nov 25 10:51:24 crc kubenswrapper[4813]: E1125 10:51:24.993002 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=placement-operator-controller-manager-5db546f9d9-2d2x7_openstack-operators(9093a664-86f3-4349-bd13-0a5e4aca8036)\"" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-2d2x7" podUID="9093a664-86f3-4349-bd13-0a5e4aca8036" Nov 25 10:51:25 crc kubenswrapper[4813]: I1125 10:51:25.029162 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-fjkzd" Nov 25 10:51:25 crc kubenswrapper[4813]: I1125 10:51:25.030260 4813 scope.go:117] "RemoveContainer" containerID="9a9014db12945f5c91d4957251d5c07fad072365298baa2de399c2d1672f60e6" Nov 25 10:51:25 crc kubenswrapper[4813]: E1125 10:51:25.030841 4813 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=swift-operator-controller-manager-6fdc4fcf86-fjkzd_openstack-operators(94c3d2b4-f1bb-402d-a39d-78e16bee970b)\"" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-fjkzd" podUID="94c3d2b4-f1bb-402d-a39d-78e16bee970b" Nov 25 10:51:25 crc kubenswrapper[4813]: I1125 10:51:25.060651 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-qplf9" Nov 25 10:51:25 crc kubenswrapper[4813]: I1125 10:51:25.061508 4813 scope.go:117] "RemoveContainer" containerID="8439f0e4f753871ca0b6d1cd7f0234c50f6500f26d7675f32dbb5d90cee04305" Nov 25 10:51:25 crc kubenswrapper[4813]: E1125 10:51:25.061933 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=telemetry-operator-controller-manager-567f98c9d-qplf9_openstack-operators(5f9254c7-c8dc-4504-bdf5-264c78e03b0c)\"" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-qplf9" podUID="5f9254c7-c8dc-4504-bdf5-264c78e03b0c" Nov 25 10:51:25 crc kubenswrapper[4813]: I1125 10:51:25.181295 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-tc2mg" Nov 25 10:51:25 crc kubenswrapper[4813]: I1125 10:51:25.182315 4813 scope.go:117] "RemoveContainer" containerID="66ec40f15a48177338b909733dfca944bf8b176edec10b3c22c1f9a4cccae5b5" Nov 25 10:51:25 crc kubenswrapper[4813]: E1125 10:51:25.182739 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=ovn-operator-controller-manager-66cf5c67ff-tc2mg_openstack-operators(db556642-a360-4559-8cde-7c25d7a893e0)\"" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-tc2mg" podUID="db556642-a360-4559-8cde-7c25d7a893e0" Nov 25 10:51:25 crc kubenswrapper[4813]: I1125 10:51:25.226892 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/watcher-operator-controller-manager-864885998-bpbjt" Nov 25 10:51:25 crc kubenswrapper[4813]: I1125 10:51:25.228362 4813 scope.go:117] "RemoveContainer" containerID="129fb58dceeec99f79108d79e4141e877c5ccddbc95d57d99165becf55b1745d" Nov 25 10:51:25 crc kubenswrapper[4813]: E1125 10:51:25.228713 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=watcher-operator-controller-manager-864885998-bpbjt_openstack-operators(48ea1018-a88f-4ef0-a82f-7e3b012522ec)\"" pod="openstack-operators/watcher-operator-controller-manager-864885998-bpbjt" podUID="48ea1018-a88f-4ef0-a82f-7e3b012522ec" Nov 25 10:51:25 crc kubenswrapper[4813]: I1125 10:51:25.845437 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Nov 25 10:51:26 crc kubenswrapper[4813]: I1125 10:51:26.018378 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Nov 25 10:51:26 crc kubenswrapper[4813]: I1125 10:51:26.018849 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Nov 25 
10:51:27 crc kubenswrapper[4813]: I1125 10:51:27.431273 4813 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Nov 25 10:51:28 crc kubenswrapper[4813]: I1125 10:51:27.466447 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Nov 25 10:51:28 crc kubenswrapper[4813]: I1125 10:51:27.466505 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Nov 25 10:51:28 crc kubenswrapper[4813]: I1125 10:51:27.737847 4813 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Nov 25 10:51:28 crc kubenswrapper[4813]: I1125 10:51:27.795385 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Nov 25 10:51:28 crc kubenswrapper[4813]: I1125 10:51:27.886366 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Nov 25 10:51:28 crc kubenswrapper[4813]: I1125 10:51:27.932494 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Nov 25 10:51:28 crc kubenswrapper[4813]: I1125 10:51:28.341125 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Nov 25 10:51:28 crc kubenswrapper[4813]: I1125 10:51:28.577086 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Nov 25 10:51:28 crc kubenswrapper[4813]: I1125 10:51:28.617051 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Nov 25 10:51:28 crc kubenswrapper[4813]: I1125 10:51:28.621659 4813 scope.go:117] "RemoveContainer" containerID="76fa17964a3871eb604f477b537ec7e69842e8b3ea7bceb7b3518f1c9a9b6c20" Nov 25 10:51:28 crc kubenswrapper[4813]: I1125 10:51:28.918152 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Nov 25 10:51:28 crc kubenswrapper[4813]: I1125 10:51:28.947104 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Nov 25 10:51:28 crc kubenswrapper[4813]: I1125 10:51:28.947168 4813 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="776cb73dc4fa34e0ed6b14e607e94731f8cf2badcb1b600b291cab373353d8a8" exitCode=137 Nov 25 10:51:28 crc kubenswrapper[4813]: I1125 10:51:28.983591 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Nov 25 10:51:29 crc kubenswrapper[4813]: I1125 10:51:29.078903 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-cdcjj/must-gather-5vk8l"] Nov 25 10:51:29 crc kubenswrapper[4813]: I1125 10:51:29.099830 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 25 10:51:29 crc kubenswrapper[4813]: I1125 10:51:29.107004 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 25 10:51:29 crc kubenswrapper[4813]: I1125 10:51:29.420911 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Nov 25 10:51:29 crc kubenswrapper[4813]: I1125 10:51:29.598728 4813 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Nov 25 10:51:29 crc kubenswrapper[4813]: E1125 10:51:29.630658 4813 log.go:32] "RunPodSandbox from runtime service failed" err=< Nov 25 10:51:29 crc kubenswrapper[4813]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_must-gather-5vk8l_openshift-must-gather-cdcjj_90d80d33-b519-4d67-97ba-1b8b828e917b_0(57003ff00fb4bafcd6cdd4212036d91118e94fddbdb492a8e3bc34bf4cd99760): error adding pod openshift-must-gather-cdcjj_must-gather-5vk8l to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"57003ff00fb4bafcd6cdd4212036d91118e94fddbdb492a8e3bc34bf4cd99760" Netns:"/var/run/netns/1fc8d20f-e9ec-43b9-a04a-ff3651f8f98e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-must-gather-cdcjj;K8S_POD_NAME=must-gather-5vk8l;K8S_POD_INFRA_CONTAINER_ID=57003ff00fb4bafcd6cdd4212036d91118e94fddbdb492a8e3bc34bf4cd99760;K8S_POD_UID=90d80d33-b519-4d67-97ba-1b8b828e917b" Path:"" ERRORED: error configuring pod [openshift-must-gather-cdcjj/must-gather-5vk8l] networking: Multus: [openshift-must-gather-cdcjj/must-gather-5vk8l/90d80d33-b519-4d67-97ba-1b8b828e917b]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod must-gather-5vk8l in out of cluster comm: pod "must-gather-5vk8l" not found Nov 25 10:51:29 crc kubenswrapper[4813]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Nov 25 10:51:29 crc kubenswrapper[4813]: > Nov 25 10:51:29 crc kubenswrapper[4813]: E1125 10:51:29.630713 4813 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Nov 25 10:51:29 crc kubenswrapper[4813]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_must-gather-5vk8l_openshift-must-gather-cdcjj_90d80d33-b519-4d67-97ba-1b8b828e917b_0(57003ff00fb4bafcd6cdd4212036d91118e94fddbdb492a8e3bc34bf4cd99760): error adding pod openshift-must-gather-cdcjj_must-gather-5vk8l to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"57003ff00fb4bafcd6cdd4212036d91118e94fddbdb492a8e3bc34bf4cd99760" Netns:"/var/run/netns/1fc8d20f-e9ec-43b9-a04a-ff3651f8f98e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-must-gather-cdcjj;K8S_POD_NAME=must-gather-5vk8l;K8S_POD_INFRA_CONTAINER_ID=57003ff00fb4bafcd6cdd4212036d91118e94fddbdb492a8e3bc34bf4cd99760;K8S_POD_UID=90d80d33-b519-4d67-97ba-1b8b828e917b" Path:"" ERRORED: error configuring pod [openshift-must-gather-cdcjj/must-gather-5vk8l] networking: Multus: [openshift-must-gather-cdcjj/must-gather-5vk8l/90d80d33-b519-4d67-97ba-1b8b828e917b]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod must-gather-5vk8l in out of cluster comm: pod "must-gather-5vk8l" not found Nov 25 10:51:29 crc kubenswrapper[4813]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Nov 25 10:51:29 crc kubenswrapper[4813]: > pod="openshift-must-gather-cdcjj/must-gather-5vk8l" Nov 25 10:51:29 crc kubenswrapper[4813]: E1125 10:51:29.630731 4813 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Nov 25 10:51:29 crc kubenswrapper[4813]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_must-gather-5vk8l_openshift-must-gather-cdcjj_90d80d33-b519-4d67-97ba-1b8b828e917b_0(57003ff00fb4bafcd6cdd4212036d91118e94fddbdb492a8e3bc34bf4cd99760): error adding pod openshift-must-gather-cdcjj_must-gather-5vk8l to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"57003ff00fb4bafcd6cdd4212036d91118e94fddbdb492a8e3bc34bf4cd99760" Netns:"/var/run/netns/1fc8d20f-e9ec-43b9-a04a-ff3651f8f98e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-must-gather-cdcjj;K8S_POD_NAME=must-gather-5vk8l;K8S_POD_INFRA_CONTAINER_ID=57003ff00fb4bafcd6cdd4212036d91118e94fddbdb492a8e3bc34bf4cd99760;K8S_POD_UID=90d80d33-b519-4d67-97ba-1b8b828e917b" Path:"" ERRORED: error configuring pod [openshift-must-gather-cdcjj/must-gather-5vk8l] networking: Multus: [openshift-must-gather-cdcjj/must-gather-5vk8l/90d80d33-b519-4d67-97ba-1b8b828e917b]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod must-gather-5vk8l in out of cluster comm: pod "must-gather-5vk8l" not found Nov 25 10:51:29 crc kubenswrapper[4813]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Nov 25 10:51:29 crc kubenswrapper[4813]: > pod="openshift-must-gather-cdcjj/must-gather-5vk8l" Nov 25 10:51:29 crc kubenswrapper[4813]: E1125 10:51:29.630789 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"must-gather-5vk8l_openshift-must-gather-cdcjj(90d80d33-b519-4d67-97ba-1b8b828e917b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"must-gather-5vk8l_openshift-must-gather-cdcjj(90d80d33-b519-4d67-97ba-1b8b828e917b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_must-gather-5vk8l_openshift-must-gather-cdcjj_90d80d33-b519-4d67-97ba-1b8b828e917b_0(57003ff00fb4bafcd6cdd4212036d91118e94fddbdb492a8e3bc34bf4cd99760): error adding pod openshift-must-gather-cdcjj_must-gather-5vk8l to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"57003ff00fb4bafcd6cdd4212036d91118e94fddbdb492a8e3bc34bf4cd99760\\\" Netns:\\\"/var/run/netns/1fc8d20f-e9ec-43b9-a04a-ff3651f8f98e\\\" IfName:\\\"eth0\\\" 
Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-must-gather-cdcjj;K8S_POD_NAME=must-gather-5vk8l;K8S_POD_INFRA_CONTAINER_ID=57003ff00fb4bafcd6cdd4212036d91118e94fddbdb492a8e3bc34bf4cd99760;K8S_POD_UID=90d80d33-b519-4d67-97ba-1b8b828e917b\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-must-gather-cdcjj/must-gather-5vk8l] networking: Multus: [openshift-must-gather-cdcjj/must-gather-5vk8l/90d80d33-b519-4d67-97ba-1b8b828e917b]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod must-gather-5vk8l in out of cluster comm: pod \\\"must-gather-5vk8l\\\" not found\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-must-gather-cdcjj/must-gather-5vk8l" podUID="90d80d33-b519-4d67-97ba-1b8b828e917b" Nov 25 10:51:29 crc kubenswrapper[4813]: I1125 10:51:29.646854 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-kzv7f" Nov 25 10:51:29 crc kubenswrapper[4813]: I1125 10:51:29.664821 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-kzv7f" Nov 25 10:51:29 crc kubenswrapper[4813]: I1125 10:51:29.695481 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Nov 25 10:51:29 crc kubenswrapper[4813]: I1125 10:51:29.695554 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 10:51:29 crc kubenswrapper[4813]: I1125 10:51:29.751650 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Nov 25 10:51:29 crc kubenswrapper[4813]: I1125 10:51:29.780932 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Nov 25 10:51:29 crc kubenswrapper[4813]: I1125 10:51:29.843123 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 25 10:51:29 crc kubenswrapper[4813]: I1125 10:51:29.843239 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 25 10:51:29 crc kubenswrapper[4813]: I1125 10:51:29.843334 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 25 10:51:29 crc kubenswrapper[4813]: I1125 10:51:29.843427 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 25 10:51:29 crc kubenswrapper[4813]: I1125 10:51:29.843469 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 25 10:51:29 crc kubenswrapper[4813]: I1125 10:51:29.843940 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 10:51:29 crc kubenswrapper[4813]: I1125 10:51:29.843983 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 10:51:29 crc kubenswrapper[4813]: I1125 10:51:29.843968 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 10:51:29 crc kubenswrapper[4813]: I1125 10:51:29.844043 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 10:51:29 crc kubenswrapper[4813]: I1125 10:51:29.881275 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 10:51:29 crc kubenswrapper[4813]: I1125 10:51:29.945035 4813 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Nov 25 10:51:29 crc kubenswrapper[4813]: I1125 10:51:29.945070 4813 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Nov 25 10:51:29 crc kubenswrapper[4813]: I1125 10:51:29.945079 4813 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Nov 25 10:51:29 crc kubenswrapper[4813]: I1125 10:51:29.945112 4813 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Nov 25 10:51:29 crc kubenswrapper[4813]: I1125 10:51:29.945123 4813 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Nov 25 10:51:29 crc kubenswrapper[4813]: I1125 10:51:29.961179 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Nov 25 10:51:29 crc kubenswrapper[4813]: I1125 10:51:29.961328 4813 scope.go:117] "RemoveContainer" containerID="776cb73dc4fa34e0ed6b14e607e94731f8cf2badcb1b600b291cab373353d8a8" Nov 25 10:51:29 crc kubenswrapper[4813]: I1125 10:51:29.961378 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 10:51:29 crc kubenswrapper[4813]: I1125 10:51:29.965993 4813 generic.go:334] "Generic (PLEG): container finished" podID="396645a8-bd9a-429a-8d95-33dcec24c4ba" containerID="4ffc2b4595865022305b801310818c2bd583c104890ff2594a5df89a6f821aad" exitCode=1 Nov 25 10:51:29 crc kubenswrapper[4813]: I1125 10:51:29.966072 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-9bjpb" event={"ID":"396645a8-bd9a-429a-8d95-33dcec24c4ba","Type":"ContainerDied","Data":"4ffc2b4595865022305b801310818c2bd583c104890ff2594a5df89a6f821aad"} Nov 25 10:51:29 crc kubenswrapper[4813]: I1125 10:51:29.966280 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cdcjj/must-gather-5vk8l" Nov 25 10:51:29 crc kubenswrapper[4813]: I1125 10:51:29.967122 4813 scope.go:117] "RemoveContainer" containerID="4ffc2b4595865022305b801310818c2bd583c104890ff2594a5df89a6f821aad" Nov 25 10:51:29 crc kubenswrapper[4813]: I1125 10:51:29.967294 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cdcjj/must-gather-5vk8l" Nov 25 10:51:29 crc kubenswrapper[4813]: E1125 10:51:29.967551 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cert-manager-cainjector\" with CrashLoopBackOff: \"back-off 20s restarting failed container=cert-manager-cainjector pod=cert-manager-cainjector-7f985d654d-9bjpb_cert-manager(396645a8-bd9a-429a-8d95-33dcec24c4ba)\"" pod="cert-manager/cert-manager-cainjector-7f985d654d-9bjpb" podUID="396645a8-bd9a-429a-8d95-33dcec24c4ba" Nov 25 10:51:29 crc kubenswrapper[4813]: I1125 10:51:29.997073 4813 scope.go:117] "RemoveContainer" containerID="76fa17964a3871eb604f477b537ec7e69842e8b3ea7bceb7b3518f1c9a9b6c20" Nov 25 10:51:30 crc kubenswrapper[4813]: E1125 10:51:30.028269 4813 log.go:32] "RunPodSandbox from runtime service failed" err=< Nov 25 10:51:30 crc kubenswrapper[4813]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ovsdbserver-nb-0_openstack_04683d4b-dec7-42f6-9803-b301f1d449c3_0(bb5251a25100177b2956ca8cdaff114434368c93081015b63135d1cbd9186e16): error adding pod openstack_ovsdbserver-nb-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"bb5251a25100177b2956ca8cdaff114434368c93081015b63135d1cbd9186e16" Netns:"/var/run/netns/60265493-7eaa-4272-8b42-384b92723ebf" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=ovsdbserver-nb-0;K8S_POD_INFRA_CONTAINER_ID=bb5251a25100177b2956ca8cdaff114434368c93081015b63135d1cbd9186e16;K8S_POD_UID=04683d4b-dec7-42f6-9803-b301f1d449c3" Path:"" ERRORED: error configuring pod [openstack/ovsdbserver-nb-0] networking: Multus: [openstack/ovsdbserver-nb-0/04683d4b-dec7-42f6-9803-b301f1d449c3]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod ovsdbserver-nb-0 in out of cluster comm: pod "ovsdbserver-nb-0" not found Nov 25 10:51:30 crc kubenswrapper[4813]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Nov 25 10:51:30 crc kubenswrapper[4813]: > Nov 25 10:51:30 crc kubenswrapper[4813]: E1125 10:51:30.029090 4813 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Nov 25 10:51:30 crc kubenswrapper[4813]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ovsdbserver-nb-0_openstack_04683d4b-dec7-42f6-9803-b301f1d449c3_0(bb5251a25100177b2956ca8cdaff114434368c93081015b63135d1cbd9186e16): error adding pod openstack_ovsdbserver-nb-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"bb5251a25100177b2956ca8cdaff114434368c93081015b63135d1cbd9186e16" 
Netns:"/var/run/netns/60265493-7eaa-4272-8b42-384b92723ebf" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=ovsdbserver-nb-0;K8S_POD_INFRA_CONTAINER_ID=bb5251a25100177b2956ca8cdaff114434368c93081015b63135d1cbd9186e16;K8S_POD_UID=04683d4b-dec7-42f6-9803-b301f1d449c3" Path:"" ERRORED: error configuring pod [openstack/ovsdbserver-nb-0] networking: Multus: [openstack/ovsdbserver-nb-0/04683d4b-dec7-42f6-9803-b301f1d449c3]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod ovsdbserver-nb-0 in out of cluster comm: pod "ovsdbserver-nb-0" not found Nov 25 10:51:30 crc kubenswrapper[4813]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Nov 25 10:51:30 crc kubenswrapper[4813]: > pod="openstack/ovsdbserver-nb-0" Nov 25 10:51:30 crc kubenswrapper[4813]: E1125 10:51:30.029145 4813 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Nov 25 10:51:30 crc kubenswrapper[4813]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ovsdbserver-nb-0_openstack_04683d4b-dec7-42f6-9803-b301f1d449c3_0(bb5251a25100177b2956ca8cdaff114434368c93081015b63135d1cbd9186e16): error adding pod openstack_ovsdbserver-nb-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"bb5251a25100177b2956ca8cdaff114434368c93081015b63135d1cbd9186e16" Netns:"/var/run/netns/60265493-7eaa-4272-8b42-384b92723ebf" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=ovsdbserver-nb-0;K8S_POD_INFRA_CONTAINER_ID=bb5251a25100177b2956ca8cdaff114434368c93081015b63135d1cbd9186e16;K8S_POD_UID=04683d4b-dec7-42f6-9803-b301f1d449c3" Path:"" ERRORED: error configuring pod [openstack/ovsdbserver-nb-0] networking: Multus: [openstack/ovsdbserver-nb-0/04683d4b-dec7-42f6-9803-b301f1d449c3]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod ovsdbserver-nb-0 in out of cluster comm: pod "ovsdbserver-nb-0" not found Nov 25 10:51:30 crc kubenswrapper[4813]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Nov 25 10:51:30 crc kubenswrapper[4813]: > pod="openstack/ovsdbserver-nb-0" Nov 25 10:51:30 crc kubenswrapper[4813]: E1125 10:51:30.029303 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ovsdbserver-nb-0_openstack(04683d4b-dec7-42f6-9803-b301f1d449c3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ovsdbserver-nb-0_openstack(04683d4b-dec7-42f6-9803-b301f1d449c3)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ovsdbserver-nb-0_openstack_04683d4b-dec7-42f6-9803-b301f1d449c3_0(bb5251a25100177b2956ca8cdaff114434368c93081015b63135d1cbd9186e16): error adding pod openstack_ovsdbserver-nb-0 
to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"bb5251a25100177b2956ca8cdaff114434368c93081015b63135d1cbd9186e16\\\" Netns:\\\"/var/run/netns/60265493-7eaa-4272-8b42-384b92723ebf\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=ovsdbserver-nb-0;K8S_POD_INFRA_CONTAINER_ID=bb5251a25100177b2956ca8cdaff114434368c93081015b63135d1cbd9186e16;K8S_POD_UID=04683d4b-dec7-42f6-9803-b301f1d449c3\\\" Path:\\\"\\\" ERRORED: error configuring pod [openstack/ovsdbserver-nb-0] networking: Multus: [openstack/ovsdbserver-nb-0/04683d4b-dec7-42f6-9803-b301f1d449c3]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod ovsdbserver-nb-0 in out of cluster comm: pod \\\"ovsdbserver-nb-0\\\" not found\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openstack/ovsdbserver-nb-0" podUID="04683d4b-dec7-42f6-9803-b301f1d449c3" Nov 25 10:51:30 crc kubenswrapper[4813]: I1125 10:51:30.368453 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-vwbhq" Nov 25 10:51:30 crc kubenswrapper[4813]: I1125 10:51:30.407040 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-z8f8b" Nov 25 10:51:30 crc kubenswrapper[4813]: I1125 10:51:30.488630 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Nov 25 10:51:30 crc kubenswrapper[4813]: I1125 10:51:30.530892 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Nov 25 10:51:30 crc kubenswrapper[4813]: E1125 10:51:30.540810 4813 log.go:32] "RunPodSandbox from runtime service failed" err=< Nov 25 10:51:30 crc kubenswrapper[4813]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ovsdbserver-sb-0_openstack_7c8efda2-acd3-4ecf-9295-0ad8d037ca94_0(a2872a207e1b89763de7ef1849989a121af4eb84916ef991532a726ed79698d4): error adding pod openstack_ovsdbserver-sb-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"a2872a207e1b89763de7ef1849989a121af4eb84916ef991532a726ed79698d4" Netns:"/var/run/netns/de7aeae7-8283-4580-9f6d-fbbfb2796d30" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=ovsdbserver-sb-0;K8S_POD_INFRA_CONTAINER_ID=a2872a207e1b89763de7ef1849989a121af4eb84916ef991532a726ed79698d4;K8S_POD_UID=7c8efda2-acd3-4ecf-9295-0ad8d037ca94" Path:"" ERRORED: error configuring pod [openstack/ovsdbserver-sb-0] networking: Multus: [openstack/ovsdbserver-sb-0/7c8efda2-acd3-4ecf-9295-0ad8d037ca94]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod ovsdbserver-sb-0 in out of cluster comm: pod "ovsdbserver-sb-0" not found 
Nov 25 10:51:30 crc kubenswrapper[4813]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Nov 25 10:51:30 crc kubenswrapper[4813]: > Nov 25 10:51:30 crc kubenswrapper[4813]: E1125 10:51:30.540959 4813 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Nov 25 10:51:30 crc kubenswrapper[4813]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ovsdbserver-sb-0_openstack_7c8efda2-acd3-4ecf-9295-0ad8d037ca94_0(a2872a207e1b89763de7ef1849989a121af4eb84916ef991532a726ed79698d4): error adding pod openstack_ovsdbserver-sb-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"a2872a207e1b89763de7ef1849989a121af4eb84916ef991532a726ed79698d4" Netns:"/var/run/netns/de7aeae7-8283-4580-9f6d-fbbfb2796d30" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=ovsdbserver-sb-0;K8S_POD_INFRA_CONTAINER_ID=a2872a207e1b89763de7ef1849989a121af4eb84916ef991532a726ed79698d4;K8S_POD_UID=7c8efda2-acd3-4ecf-9295-0ad8d037ca94" Path:"" ERRORED: error configuring pod [openstack/ovsdbserver-sb-0] networking: Multus: [openstack/ovsdbserver-sb-0/7c8efda2-acd3-4ecf-9295-0ad8d037ca94]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod ovsdbserver-sb-0 in out of cluster comm: pod "ovsdbserver-sb-0" not found Nov 25 10:51:30 crc kubenswrapper[4813]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Nov 25 10:51:30 crc kubenswrapper[4813]: > pod="openstack/ovsdbserver-sb-0" Nov 25 10:51:30 crc kubenswrapper[4813]: E1125 10:51:30.541000 4813 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Nov 25 10:51:30 crc kubenswrapper[4813]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ovsdbserver-sb-0_openstack_7c8efda2-acd3-4ecf-9295-0ad8d037ca94_0(a2872a207e1b89763de7ef1849989a121af4eb84916ef991532a726ed79698d4): error adding pod openstack_ovsdbserver-sb-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"a2872a207e1b89763de7ef1849989a121af4eb84916ef991532a726ed79698d4" Netns:"/var/run/netns/de7aeae7-8283-4580-9f6d-fbbfb2796d30" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=ovsdbserver-sb-0;K8S_POD_INFRA_CONTAINER_ID=a2872a207e1b89763de7ef1849989a121af4eb84916ef991532a726ed79698d4;K8S_POD_UID=7c8efda2-acd3-4ecf-9295-0ad8d037ca94" Path:"" ERRORED: error configuring pod [openstack/ovsdbserver-sb-0] networking: Multus: [openstack/ovsdbserver-sb-0/7c8efda2-acd3-4ecf-9295-0ad8d037ca94]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod ovsdbserver-sb-0 in out of cluster comm: pod 
"ovsdbserver-sb-0" not found Nov 25 10:51:30 crc kubenswrapper[4813]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Nov 25 10:51:30 crc kubenswrapper[4813]: > pod="openstack/ovsdbserver-sb-0" Nov 25 10:51:30 crc kubenswrapper[4813]: E1125 10:51:30.541102 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ovsdbserver-sb-0_openstack(7c8efda2-acd3-4ecf-9295-0ad8d037ca94)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ovsdbserver-sb-0_openstack(7c8efda2-acd3-4ecf-9295-0ad8d037ca94)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ovsdbserver-sb-0_openstack_7c8efda2-acd3-4ecf-9295-0ad8d037ca94_0(a2872a207e1b89763de7ef1849989a121af4eb84916ef991532a726ed79698d4): error adding pod openstack_ovsdbserver-sb-0 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"a2872a207e1b89763de7ef1849989a121af4eb84916ef991532a726ed79698d4\\\" Netns:\\\"/var/run/netns/de7aeae7-8283-4580-9f6d-fbbfb2796d30\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=ovsdbserver-sb-0;K8S_POD_INFRA_CONTAINER_ID=a2872a207e1b89763de7ef1849989a121af4eb84916ef991532a726ed79698d4;K8S_POD_UID=7c8efda2-acd3-4ecf-9295-0ad8d037ca94\\\" Path:\\\"\\\" ERRORED: error configuring pod [openstack/ovsdbserver-sb-0] networking: Multus: [openstack/ovsdbserver-sb-0/7c8efda2-acd3-4ecf-9295-0ad8d037ca94]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod ovsdbserver-sb-0 in out of cluster comm: pod \\\"ovsdbserver-sb-0\\\" not found\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openstack/ovsdbserver-sb-0" podUID="7c8efda2-acd3-4ecf-9295-0ad8d037ca94" Nov 25 10:51:30 crc kubenswrapper[4813]: I1125 10:51:30.841331 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Nov 25 10:51:30 crc kubenswrapper[4813]: I1125 10:51:30.875813 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Nov 25 10:51:30 crc kubenswrapper[4813]: I1125 10:51:30.928421 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Nov 25 10:51:30 crc kubenswrapper[4813]: I1125 10:51:30.978176 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 25 10:51:30 crc kubenswrapper[4813]: I1125 10:51:30.978184 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 25 10:51:30 crc kubenswrapper[4813]: I1125 10:51:30.978924 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 25 10:51:30 crc kubenswrapper[4813]: I1125 10:51:30.978948 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 25 10:51:31 crc kubenswrapper[4813]: I1125 10:51:31.018136 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Nov 25 10:51:31 crc kubenswrapper[4813]: I1125 10:51:31.112931 4813 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Nov 25 10:51:31 crc kubenswrapper[4813]: I1125 10:51:31.224166 4813 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Nov 25 10:51:31 crc kubenswrapper[4813]: I1125 10:51:31.541932 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Nov 25 10:51:31 crc kubenswrapper[4813]: I1125 10:51:31.633286 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Nov 25 10:51:31 crc kubenswrapper[4813]: I1125 10:51:31.633734 4813 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Nov 25 10:51:31 crc kubenswrapper[4813]: I1125 10:51:31.647658 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 25 10:51:31 crc kubenswrapper[4813]: I1125 10:51:31.647708 4813 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="ba2b1222-d64e-49a8-b357-77807a8e987a" Nov 25 10:51:31 crc kubenswrapper[4813]: I1125 10:51:31.655788 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 25 10:51:31 crc kubenswrapper[4813]: I1125 10:51:31.655839 4813 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="ba2b1222-d64e-49a8-b357-77807a8e987a" Nov 25 10:51:32 crc kubenswrapper[4813]: I1125 10:51:32.313558 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Nov 25 10:51:32 crc kubenswrapper[4813]: I1125 10:51:32.421498 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-9d9wl" Nov 25 10:51:32 crc kubenswrapper[4813]: I1125 10:51:32.577225 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Nov 25 10:51:32 crc kubenswrapper[4813]: I1125 10:51:32.597156 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Nov 25 10:51:32 crc kubenswrapper[4813]: I1125 10:51:32.621759 4813 scope.go:117] "RemoveContainer" containerID="e28573f59b3128bbce8c504da372cc321d5d011981900766f25a8cfb8347def3" Nov 25 10:51:32 crc kubenswrapper[4813]: E1125 10:51:32.917955 4813 log.go:32] "RunPodSandbox from runtime service failed" err=< Nov 25 10:51:32 crc kubenswrapper[4813]: rpc 
error: code = Unknown desc = failed to create pod network sandbox k8s_must-gather-5vk8l_openshift-must-gather-cdcjj_90d80d33-b519-4d67-97ba-1b8b828e917b_0(0f4cbc19e62eab6f0e8334654848782388a57ab9ed5f295556a5bdbd5518531e): error adding pod openshift-must-gather-cdcjj_must-gather-5vk8l to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"0f4cbc19e62eab6f0e8334654848782388a57ab9ed5f295556a5bdbd5518531e" Netns:"/var/run/netns/2c9d2943-9490-4e1e-8f0a-02c9c6509799" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-must-gather-cdcjj;K8S_POD_NAME=must-gather-5vk8l;K8S_POD_INFRA_CONTAINER_ID=0f4cbc19e62eab6f0e8334654848782388a57ab9ed5f295556a5bdbd5518531e;K8S_POD_UID=90d80d33-b519-4d67-97ba-1b8b828e917b" Path:"" ERRORED: error configuring pod [openshift-must-gather-cdcjj/must-gather-5vk8l] networking: Multus: [openshift-must-gather-cdcjj/must-gather-5vk8l/90d80d33-b519-4d67-97ba-1b8b828e917b]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod must-gather-5vk8l in out of cluster comm: pod "must-gather-5vk8l" not found Nov 25 10:51:32 crc kubenswrapper[4813]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Nov 25 10:51:32 crc kubenswrapper[4813]: > Nov 25 10:51:32 crc kubenswrapper[4813]: E1125 10:51:32.918049 4813 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Nov 25 10:51:32 crc kubenswrapper[4813]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_must-gather-5vk8l_openshift-must-gather-cdcjj_90d80d33-b519-4d67-97ba-1b8b828e917b_0(0f4cbc19e62eab6f0e8334654848782388a57ab9ed5f295556a5bdbd5518531e): error adding pod openshift-must-gather-cdcjj_must-gather-5vk8l to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"0f4cbc19e62eab6f0e8334654848782388a57ab9ed5f295556a5bdbd5518531e" Netns:"/var/run/netns/2c9d2943-9490-4e1e-8f0a-02c9c6509799" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-must-gather-cdcjj;K8S_POD_NAME=must-gather-5vk8l;K8S_POD_INFRA_CONTAINER_ID=0f4cbc19e62eab6f0e8334654848782388a57ab9ed5f295556a5bdbd5518531e;K8S_POD_UID=90d80d33-b519-4d67-97ba-1b8b828e917b" Path:"" ERRORED: error configuring pod [openshift-must-gather-cdcjj/must-gather-5vk8l] networking: Multus: [openshift-must-gather-cdcjj/must-gather-5vk8l/90d80d33-b519-4d67-97ba-1b8b828e917b]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod must-gather-5vk8l in out of cluster comm: pod "must-gather-5vk8l" not found Nov 25 10:51:32 crc kubenswrapper[4813]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Nov 25 10:51:32 crc kubenswrapper[4813]: > 
pod="openshift-must-gather-cdcjj/must-gather-5vk8l" Nov 25 10:51:32 crc kubenswrapper[4813]: E1125 10:51:32.918074 4813 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Nov 25 10:51:32 crc kubenswrapper[4813]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_must-gather-5vk8l_openshift-must-gather-cdcjj_90d80d33-b519-4d67-97ba-1b8b828e917b_0(0f4cbc19e62eab6f0e8334654848782388a57ab9ed5f295556a5bdbd5518531e): error adding pod openshift-must-gather-cdcjj_must-gather-5vk8l to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"0f4cbc19e62eab6f0e8334654848782388a57ab9ed5f295556a5bdbd5518531e" Netns:"/var/run/netns/2c9d2943-9490-4e1e-8f0a-02c9c6509799" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-must-gather-cdcjj;K8S_POD_NAME=must-gather-5vk8l;K8S_POD_INFRA_CONTAINER_ID=0f4cbc19e62eab6f0e8334654848782388a57ab9ed5f295556a5bdbd5518531e;K8S_POD_UID=90d80d33-b519-4d67-97ba-1b8b828e917b" Path:"" ERRORED: error configuring pod [openshift-must-gather-cdcjj/must-gather-5vk8l] networking: Multus: [openshift-must-gather-cdcjj/must-gather-5vk8l/90d80d33-b519-4d67-97ba-1b8b828e917b]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod must-gather-5vk8l in out of cluster comm: pod "must-gather-5vk8l" not found Nov 25 10:51:32 crc kubenswrapper[4813]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Nov 25 10:51:32 crc kubenswrapper[4813]: > pod="openshift-must-gather-cdcjj/must-gather-5vk8l" Nov 25 10:51:32 crc kubenswrapper[4813]: E1125 10:51:32.918128 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"must-gather-5vk8l_openshift-must-gather-cdcjj(90d80d33-b519-4d67-97ba-1b8b828e917b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"must-gather-5vk8l_openshift-must-gather-cdcjj(90d80d33-b519-4d67-97ba-1b8b828e917b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_must-gather-5vk8l_openshift-must-gather-cdcjj_90d80d33-b519-4d67-97ba-1b8b828e917b_0(0f4cbc19e62eab6f0e8334654848782388a57ab9ed5f295556a5bdbd5518531e): error adding pod openshift-must-gather-cdcjj_must-gather-5vk8l to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"0f4cbc19e62eab6f0e8334654848782388a57ab9ed5f295556a5bdbd5518531e\\\" Netns:\\\"/var/run/netns/2c9d2943-9490-4e1e-8f0a-02c9c6509799\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-must-gather-cdcjj;K8S_POD_NAME=must-gather-5vk8l;K8S_POD_INFRA_CONTAINER_ID=0f4cbc19e62eab6f0e8334654848782388a57ab9ed5f295556a5bdbd5518531e;K8S_POD_UID=90d80d33-b519-4d67-97ba-1b8b828e917b\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-must-gather-cdcjj/must-gather-5vk8l] networking: Multus: [openshift-must-gather-cdcjj/must-gather-5vk8l/90d80d33-b519-4d67-97ba-1b8b828e917b]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to 
query the pod must-gather-5vk8l in out of cluster comm: pod \\\"must-gather-5vk8l\\\" not found\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-must-gather-cdcjj/must-gather-5vk8l" podUID="90d80d33-b519-4d67-97ba-1b8b828e917b" Nov 25 10:51:33 crc kubenswrapper[4813]: I1125 10:51:33.166789 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Nov 25 10:51:33 crc kubenswrapper[4813]: I1125 10:51:33.219792 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Nov 25 10:51:33 crc kubenswrapper[4813]: I1125 10:51:33.250843 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 25 10:51:33 crc kubenswrapper[4813]: I1125 10:51:33.568836 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Nov 25 10:51:33 crc kubenswrapper[4813]: I1125 10:51:33.626397 4813 scope.go:117] "RemoveContainer" containerID="91295643a276fd8e2a13cfaa5b1900a0ab4fc266378e1218439d9294577c93cd" Nov 25 10:51:33 crc kubenswrapper[4813]: I1125 10:51:33.626560 4813 scope.go:117] "RemoveContainer" containerID="b1071257cf141e4ed949afbe28f925bacd737907f1dc1d027f282faf5869e5aa" Nov 25 10:51:33 crc kubenswrapper[4813]: E1125 10:51:33.626782 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-state-metrics pod=kube-state-metrics-0_openstack(e9030c35-b810-4f59-b1e6-5daec39fcc6d)\"" pod="openstack/kube-state-metrics-0" podUID="e9030c35-b810-4f59-b1e6-5daec39fcc6d" Nov 25 10:51:33 crc kubenswrapper[4813]: E1125 10:51:33.626867 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=metallb-operator-controller-manager-6b84b955f5-mmrm7_metallb-system(a6eb0ffd-2e55-4d5a-9ac7-19b25ba6ec8b)\"" pod="metallb-system/metallb-operator-controller-manager-6b84b955f5-mmrm7" podUID="a6eb0ffd-2e55-4d5a-9ac7-19b25ba6ec8b" Nov 25 10:51:33 crc kubenswrapper[4813]: I1125 10:51:33.703390 4813 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-xhjqf" Nov 25 10:51:33 crc kubenswrapper[4813]: I1125 10:51:33.880819 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-cctvq" Nov 25 10:51:33 crc kubenswrapper[4813]: I1125 10:51:33.928261 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Nov 25 10:51:34 crc kubenswrapper[4813]: I1125 10:51:34.005545 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qd4tx" 
event={"ID":"2bf03402-32ec-423d-a6af-657bc0cfeb15","Type":"ContainerStarted","Data":"08e7f311e38946acbfb35ae6b1a86c7ad47e62db1724f5a533a0c9ebfbd382a3"} Nov 25 10:51:34 crc kubenswrapper[4813]: I1125 10:51:34.251565 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Nov 25 10:51:34 crc kubenswrapper[4813]: I1125 10:51:34.330460 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="aea2efa1-cb45-4657-8ea6-efd7799cb0a4" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.101:5671: connect: connection refused" Nov 25 10:51:34 crc kubenswrapper[4813]: E1125 10:51:34.339876 4813 log.go:32] "RunPodSandbox from runtime service failed" err=< Nov 25 10:51:34 crc kubenswrapper[4813]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ovsdbserver-sb-0_openstack_7c8efda2-acd3-4ecf-9295-0ad8d037ca94_0(3ee174eb6f4581ee4161b501fd6729db406f03e8cc0448494e8ccfda24b9d268): error adding pod openstack_ovsdbserver-sb-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"3ee174eb6f4581ee4161b501fd6729db406f03e8cc0448494e8ccfda24b9d268" Netns:"/var/run/netns/2034252a-c4c7-4c96-8df8-e0bc27eeb8b9" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=ovsdbserver-sb-0;K8S_POD_INFRA_CONTAINER_ID=3ee174eb6f4581ee4161b501fd6729db406f03e8cc0448494e8ccfda24b9d268;K8S_POD_UID=7c8efda2-acd3-4ecf-9295-0ad8d037ca94" Path:"" ERRORED: error configuring pod [openstack/ovsdbserver-sb-0] networking: Multus: [openstack/ovsdbserver-sb-0/7c8efda2-acd3-4ecf-9295-0ad8d037ca94]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod ovsdbserver-sb-0 in out of cluster comm: pod "ovsdbserver-sb-0" not found Nov 25 10:51:34 crc kubenswrapper[4813]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Nov 25 10:51:34 crc kubenswrapper[4813]: > Nov 25 10:51:34 crc kubenswrapper[4813]: E1125 10:51:34.340005 4813 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Nov 25 10:51:34 crc kubenswrapper[4813]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ovsdbserver-sb-0_openstack_7c8efda2-acd3-4ecf-9295-0ad8d037ca94_0(3ee174eb6f4581ee4161b501fd6729db406f03e8cc0448494e8ccfda24b9d268): error adding pod openstack_ovsdbserver-sb-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"3ee174eb6f4581ee4161b501fd6729db406f03e8cc0448494e8ccfda24b9d268" Netns:"/var/run/netns/2034252a-c4c7-4c96-8df8-e0bc27eeb8b9" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=ovsdbserver-sb-0;K8S_POD_INFRA_CONTAINER_ID=3ee174eb6f4581ee4161b501fd6729db406f03e8cc0448494e8ccfda24b9d268;K8S_POD_UID=7c8efda2-acd3-4ecf-9295-0ad8d037ca94" Path:"" ERRORED: error configuring pod [openstack/ovsdbserver-sb-0] networking: Multus: [openstack/ovsdbserver-sb-0/7c8efda2-acd3-4ecf-9295-0ad8d037ca94]: error setting the networks 
status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod ovsdbserver-sb-0 in out of cluster comm: pod "ovsdbserver-sb-0" not found Nov 25 10:51:34 crc kubenswrapper[4813]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Nov 25 10:51:34 crc kubenswrapper[4813]: > pod="openstack/ovsdbserver-sb-0" Nov 25 10:51:34 crc kubenswrapper[4813]: E1125 10:51:34.340038 4813 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Nov 25 10:51:34 crc kubenswrapper[4813]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ovsdbserver-sb-0_openstack_7c8efda2-acd3-4ecf-9295-0ad8d037ca94_0(3ee174eb6f4581ee4161b501fd6729db406f03e8cc0448494e8ccfda24b9d268): error adding pod openstack_ovsdbserver-sb-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"3ee174eb6f4581ee4161b501fd6729db406f03e8cc0448494e8ccfda24b9d268" Netns:"/var/run/netns/2034252a-c4c7-4c96-8df8-e0bc27eeb8b9" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=ovsdbserver-sb-0;K8S_POD_INFRA_CONTAINER_ID=3ee174eb6f4581ee4161b501fd6729db406f03e8cc0448494e8ccfda24b9d268;K8S_POD_UID=7c8efda2-acd3-4ecf-9295-0ad8d037ca94" Path:"" ERRORED: error configuring pod [openstack/ovsdbserver-sb-0] networking: Multus: [openstack/ovsdbserver-sb-0/7c8efda2-acd3-4ecf-9295-0ad8d037ca94]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod ovsdbserver-sb-0 in out of cluster comm: pod "ovsdbserver-sb-0" not found Nov 25 10:51:34 crc kubenswrapper[4813]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Nov 25 10:51:34 crc kubenswrapper[4813]: > pod="openstack/ovsdbserver-sb-0" Nov 25 10:51:34 crc kubenswrapper[4813]: E1125 10:51:34.340096 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ovsdbserver-sb-0_openstack(7c8efda2-acd3-4ecf-9295-0ad8d037ca94)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ovsdbserver-sb-0_openstack(7c8efda2-acd3-4ecf-9295-0ad8d037ca94)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ovsdbserver-sb-0_openstack_7c8efda2-acd3-4ecf-9295-0ad8d037ca94_0(3ee174eb6f4581ee4161b501fd6729db406f03e8cc0448494e8ccfda24b9d268): error adding pod openstack_ovsdbserver-sb-0 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"3ee174eb6f4581ee4161b501fd6729db406f03e8cc0448494e8ccfda24b9d268\\\" Netns:\\\"/var/run/netns/2034252a-c4c7-4c96-8df8-e0bc27eeb8b9\\\" IfName:\\\"eth0\\\" 
Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=ovsdbserver-sb-0;K8S_POD_INFRA_CONTAINER_ID=3ee174eb6f4581ee4161b501fd6729db406f03e8cc0448494e8ccfda24b9d268;K8S_POD_UID=7c8efda2-acd3-4ecf-9295-0ad8d037ca94\\\" Path:\\\"\\\" ERRORED: error configuring pod [openstack/ovsdbserver-sb-0] networking: Multus: [openstack/ovsdbserver-sb-0/7c8efda2-acd3-4ecf-9295-0ad8d037ca94]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod ovsdbserver-sb-0 in out of cluster comm: pod \\\"ovsdbserver-sb-0\\\" not found\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openstack/ovsdbserver-sb-0" podUID="7c8efda2-acd3-4ecf-9295-0ad8d037ca94" Nov 25 10:51:34 crc kubenswrapper[4813]: I1125 10:51:34.597902 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-qjpvf" podUID="da545e4e-8f60-4fb5-93e8-d9e9014c3c74" containerName="ovn-controller" probeResult="failure" output=< Nov 25 10:51:34 crc kubenswrapper[4813]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 25 10:51:34 crc kubenswrapper[4813]: > Nov 25 10:51:34 crc kubenswrapper[4813]: I1125 10:51:34.606851 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="bf91d2ed-6d43-49b1-8010-1f59f38aea76" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.102:5671: connect: connection refused" Nov 25 10:51:35 crc kubenswrapper[4813]: E1125 10:51:35.194328 4813 log.go:32] "RunPodSandbox from runtime service failed" err=< Nov 25 10:51:35 crc kubenswrapper[4813]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ovsdbserver-nb-0_openstack_04683d4b-dec7-42f6-9803-b301f1d449c3_0(31296c9e6305d07092f8d83353774e807b2dcb54632e04d65c9bd08c33ab216b): error adding pod openstack_ovsdbserver-nb-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"31296c9e6305d07092f8d83353774e807b2dcb54632e04d65c9bd08c33ab216b" Netns:"/var/run/netns/f1ce0c78-e0b7-4d5b-8d46-7140878bb9e2" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=ovsdbserver-nb-0;K8S_POD_INFRA_CONTAINER_ID=31296c9e6305d07092f8d83353774e807b2dcb54632e04d65c9bd08c33ab216b;K8S_POD_UID=04683d4b-dec7-42f6-9803-b301f1d449c3" Path:"" ERRORED: error configuring pod [openstack/ovsdbserver-nb-0] networking: Multus: [openstack/ovsdbserver-nb-0/04683d4b-dec7-42f6-9803-b301f1d449c3]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod ovsdbserver-nb-0 in out of cluster comm: pod "ovsdbserver-nb-0" not found Nov 25 10:51:35 crc kubenswrapper[4813]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Nov 25 10:51:35 crc kubenswrapper[4813]: > Nov 25 10:51:35 crc kubenswrapper[4813]: E1125 10:51:35.194813 4813 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Nov 25 10:51:35 crc kubenswrapper[4813]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ovsdbserver-nb-0_openstack_04683d4b-dec7-42f6-9803-b301f1d449c3_0(31296c9e6305d07092f8d83353774e807b2dcb54632e04d65c9bd08c33ab216b): error adding pod openstack_ovsdbserver-nb-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"31296c9e6305d07092f8d83353774e807b2dcb54632e04d65c9bd08c33ab216b" Netns:"/var/run/netns/f1ce0c78-e0b7-4d5b-8d46-7140878bb9e2" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=ovsdbserver-nb-0;K8S_POD_INFRA_CONTAINER_ID=31296c9e6305d07092f8d83353774e807b2dcb54632e04d65c9bd08c33ab216b;K8S_POD_UID=04683d4b-dec7-42f6-9803-b301f1d449c3" Path:"" ERRORED: error configuring pod [openstack/ovsdbserver-nb-0] networking: Multus: [openstack/ovsdbserver-nb-0/04683d4b-dec7-42f6-9803-b301f1d449c3]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod ovsdbserver-nb-0 in out of cluster comm: pod "ovsdbserver-nb-0" not found Nov 25 10:51:35 crc kubenswrapper[4813]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Nov 25 10:51:35 crc kubenswrapper[4813]: > pod="openstack/ovsdbserver-nb-0" Nov 25 10:51:35 crc kubenswrapper[4813]: E1125 10:51:35.194848 4813 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Nov 25 10:51:35 crc kubenswrapper[4813]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ovsdbserver-nb-0_openstack_04683d4b-dec7-42f6-9803-b301f1d449c3_0(31296c9e6305d07092f8d83353774e807b2dcb54632e04d65c9bd08c33ab216b): error adding pod openstack_ovsdbserver-nb-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"31296c9e6305d07092f8d83353774e807b2dcb54632e04d65c9bd08c33ab216b" Netns:"/var/run/netns/f1ce0c78-e0b7-4d5b-8d46-7140878bb9e2" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=ovsdbserver-nb-0;K8S_POD_INFRA_CONTAINER_ID=31296c9e6305d07092f8d83353774e807b2dcb54632e04d65c9bd08c33ab216b;K8S_POD_UID=04683d4b-dec7-42f6-9803-b301f1d449c3" Path:"" ERRORED: error configuring pod [openstack/ovsdbserver-nb-0] networking: Multus: [openstack/ovsdbserver-nb-0/04683d4b-dec7-42f6-9803-b301f1d449c3]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod ovsdbserver-nb-0 in out of cluster comm: pod "ovsdbserver-nb-0" not found Nov 25 10:51:35 crc kubenswrapper[4813]: 
': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Nov 25 10:51:35 crc kubenswrapper[4813]: > pod="openstack/ovsdbserver-nb-0" Nov 25 10:51:35 crc kubenswrapper[4813]: E1125 10:51:35.194950 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ovsdbserver-nb-0_openstack(04683d4b-dec7-42f6-9803-b301f1d449c3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ovsdbserver-nb-0_openstack(04683d4b-dec7-42f6-9803-b301f1d449c3)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ovsdbserver-nb-0_openstack_04683d4b-dec7-42f6-9803-b301f1d449c3_0(31296c9e6305d07092f8d83353774e807b2dcb54632e04d65c9bd08c33ab216b): error adding pod openstack_ovsdbserver-nb-0 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"31296c9e6305d07092f8d83353774e807b2dcb54632e04d65c9bd08c33ab216b\\\" Netns:\\\"/var/run/netns/f1ce0c78-e0b7-4d5b-8d46-7140878bb9e2\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=ovsdbserver-nb-0;K8S_POD_INFRA_CONTAINER_ID=31296c9e6305d07092f8d83353774e807b2dcb54632e04d65c9bd08c33ab216b;K8S_POD_UID=04683d4b-dec7-42f6-9803-b301f1d449c3\\\" Path:\\\"\\\" ERRORED: error configuring pod [openstack/ovsdbserver-nb-0] networking: Multus: [openstack/ovsdbserver-nb-0/04683d4b-dec7-42f6-9803-b301f1d449c3]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod ovsdbserver-nb-0 in out of cluster comm: pod \\\"ovsdbserver-nb-0\\\" not found\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openstack/ovsdbserver-nb-0" podUID="04683d4b-dec7-42f6-9803-b301f1d449c3" Nov 25 10:51:35 crc kubenswrapper[4813]: I1125 10:51:35.248596 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Nov 25 10:51:35 crc kubenswrapper[4813]: I1125 10:51:35.277911 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Nov 25 10:51:35 crc kubenswrapper[4813]: I1125 10:51:35.295026 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Nov 25 10:51:35 crc kubenswrapper[4813]: I1125 10:51:35.379721 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Nov 25 10:51:35 crc kubenswrapper[4813]: I1125 10:51:35.592404 4813 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-node-identity"/"ovnkube-identity-cm" Nov 25 10:51:35 crc kubenswrapper[4813]: I1125 10:51:35.621363 4813 scope.go:117] "RemoveContainer" containerID="6cfbef3e5911a335e778a25cc22825312e21d3376c549b161d9302f36e73d1b9" Nov 25 10:51:35 crc kubenswrapper[4813]: I1125 10:51:35.621522 4813 scope.go:117] "RemoveContainer" containerID="16460e4f9c43088098ac12f9e10def54db37c1068c6a044a870425a3f19e77b4" Nov 25 10:51:35 crc kubenswrapper[4813]: I1125 10:51:35.621770 4813 scope.go:117] "RemoveContainer" containerID="86e726cf9b8333f0660a30b9e6f09b1e7a7dd75a7fe3436c10eff9990aebb19c" Nov 25 10:51:35 crc kubenswrapper[4813]: I1125 10:51:35.621872 4813 scope.go:117] "RemoveContainer" containerID="e433512445f8930eb23fc7cbeaee87d955772ae05eaa6befc87f3a1cc1f105cf" Nov 25 10:51:35 crc kubenswrapper[4813]: I1125 10:51:35.758869 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Nov 25 10:51:35 crc kubenswrapper[4813]: I1125 10:51:35.996629 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Nov 25 10:51:36 crc kubenswrapper[4813]: I1125 10:51:36.005737 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Nov 25 10:51:36 crc kubenswrapper[4813]: I1125 10:51:36.028665 4813 generic.go:334] "Generic (PLEG): container finished" podID="2bf03402-32ec-423d-a6af-657bc0cfeb15" containerID="08e7f311e38946acbfb35ae6b1a86c7ad47e62db1724f5a533a0c9ebfbd382a3" exitCode=1 Nov 25 10:51:36 crc kubenswrapper[4813]: I1125 10:51:36.028744 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qd4tx" event={"ID":"2bf03402-32ec-423d-a6af-657bc0cfeb15","Type":"ContainerDied","Data":"08e7f311e38946acbfb35ae6b1a86c7ad47e62db1724f5a533a0c9ebfbd382a3"} Nov 25 10:51:36 crc kubenswrapper[4813]: I1125 10:51:36.028785 4813 scope.go:117] "RemoveContainer" containerID="e28573f59b3128bbce8c504da372cc321d5d011981900766f25a8cfb8347def3" Nov 25 10:51:36 crc kubenswrapper[4813]: I1125 10:51:36.029346 4813 scope.go:117] "RemoveContainer" containerID="08e7f311e38946acbfb35ae6b1a86c7ad47e62db1724f5a533a0c9ebfbd382a3" Nov 25 10:51:36 crc kubenswrapper[4813]: E1125 10:51:36.029566 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=operator pod=rabbitmq-cluster-operator-manager-668c99d594-qd4tx_openstack-operators(2bf03402-32ec-423d-a6af-657bc0cfeb15)\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qd4tx" podUID="2bf03402-32ec-423d-a6af-657bc0cfeb15" Nov 25 10:51:36 crc kubenswrapper[4813]: I1125 10:51:36.032937 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Nov 25 10:51:36 crc kubenswrapper[4813]: I1125 10:51:36.098455 4813 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-g4q7s" Nov 25 10:51:36 crc kubenswrapper[4813]: I1125 10:51:36.302856 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Nov 25 10:51:36 crc kubenswrapper[4813]: I1125 10:51:36.323425 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-p9x2p" Nov 25 10:51:36 crc kubenswrapper[4813]: 
I1125 10:51:36.346104 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-n8qg8" Nov 25 10:51:36 crc kubenswrapper[4813]: I1125 10:51:36.381971 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Nov 25 10:51:36 crc kubenswrapper[4813]: I1125 10:51:36.386172 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-rf7b9" Nov 25 10:51:36 crc kubenswrapper[4813]: I1125 10:51:36.547406 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 25 10:51:36 crc kubenswrapper[4813]: I1125 10:51:36.622342 4813 scope.go:117] "RemoveContainer" containerID="992531cc19bfe1ced64390d5b58ded8d348ef8aad2de68f0eb7b8d5f8b4ff0d3" Nov 25 10:51:36 crc kubenswrapper[4813]: I1125 10:51:36.622883 4813 scope.go:117] "RemoveContainer" containerID="66ec40f15a48177338b909733dfca944bf8b176edec10b3c22c1f9a4cccae5b5" Nov 25 10:51:36 crc kubenswrapper[4813]: I1125 10:51:36.700154 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Nov 25 10:51:36 crc kubenswrapper[4813]: I1125 10:51:36.725027 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Nov 25 10:51:36 crc kubenswrapper[4813]: I1125 10:51:36.931061 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Nov 25 10:51:37 crc kubenswrapper[4813]: I1125 10:51:37.038894 4813 generic.go:334] "Generic (PLEG): container finished" podID="efca9205-8a59-45ce-8c50-36b0d0389f12" containerID="30c767accdc9d5805bb70bfa2132237ce239b924316a2f4a373fc18a12755362" exitCode=1 Nov 25 10:51:37 crc kubenswrapper[4813]: I1125 10:51:37.038989 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-jcjzx" event={"ID":"efca9205-8a59-45ce-8c50-36b0d0389f12","Type":"ContainerDied","Data":"30c767accdc9d5805bb70bfa2132237ce239b924316a2f4a373fc18a12755362"} Nov 25 10:51:37 crc kubenswrapper[4813]: I1125 10:51:37.039059 4813 scope.go:117] "RemoveContainer" containerID="e433512445f8930eb23fc7cbeaee87d955772ae05eaa6befc87f3a1cc1f105cf" Nov 25 10:51:37 crc kubenswrapper[4813]: I1125 10:51:37.039950 4813 scope.go:117] "RemoveContainer" containerID="30c767accdc9d5805bb70bfa2132237ce239b924316a2f4a373fc18a12755362" Nov 25 10:51:37 crc kubenswrapper[4813]: E1125 10:51:37.040266 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=manila-operator-controller-manager-58bb8d67cc-jcjzx_openstack-operators(efca9205-8a59-45ce-8c50-36b0d0389f12)\"" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-jcjzx" podUID="efca9205-8a59-45ce-8c50-36b0d0389f12" Nov 25 10:51:37 crc kubenswrapper[4813]: I1125 10:51:37.042145 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-dvfd9" event={"ID":"a650bdd3-2541-4b76-b5db-64273262bc06","Type":"ContainerDied","Data":"e558d240a6bd77705f05b2797b29ea6fd8c416ff6fd3f1b978e06358afb10f7b"} Nov 25 10:51:37 crc kubenswrapper[4813]: I1125 10:51:37.042053 4813 generic.go:334] "Generic (PLEG): container finished" 
podID="a650bdd3-2541-4b76-b5db-64273262bc06" containerID="e558d240a6bd77705f05b2797b29ea6fd8c416ff6fd3f1b978e06358afb10f7b" exitCode=1 Nov 25 10:51:37 crc kubenswrapper[4813]: I1125 10:51:37.042647 4813 scope.go:117] "RemoveContainer" containerID="e558d240a6bd77705f05b2797b29ea6fd8c416ff6fd3f1b978e06358afb10f7b" Nov 25 10:51:37 crc kubenswrapper[4813]: E1125 10:51:37.042922 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=cinder-operator-controller-manager-79856dc55c-dvfd9_openstack-operators(a650bdd3-2541-4b76-b5db-64273262bc06)\"" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-dvfd9" podUID="a650bdd3-2541-4b76-b5db-64273262bc06" Nov 25 10:51:37 crc kubenswrapper[4813]: I1125 10:51:37.044236 4813 scope.go:117] "RemoveContainer" containerID="08e7f311e38946acbfb35ae6b1a86c7ad47e62db1724f5a533a0c9ebfbd382a3" Nov 25 10:51:37 crc kubenswrapper[4813]: E1125 10:51:37.044448 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=operator pod=rabbitmq-cluster-operator-manager-668c99d594-qd4tx_openstack-operators(2bf03402-32ec-423d-a6af-657bc0cfeb15)\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qd4tx" podUID="2bf03402-32ec-423d-a6af-657bc0cfeb15" Nov 25 10:51:37 crc kubenswrapper[4813]: I1125 10:51:37.050199 4813 generic.go:334] "Generic (PLEG): container finished" podID="03c63a63-9a46-4bda-941b-8c5ba81a13fe" containerID="5a7ab610a3c323904b49fb346bfb5bfd21fe5707ab51c0de4176662641459056" exitCode=1 Nov 25 10:51:37 crc kubenswrapper[4813]: I1125 10:51:37.050313 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-4wff2" event={"ID":"03c63a63-9a46-4bda-941b-8c5ba81a13fe","Type":"ContainerDied","Data":"5a7ab610a3c323904b49fb346bfb5bfd21fe5707ab51c0de4176662641459056"} Nov 25 10:51:37 crc kubenswrapper[4813]: I1125 10:51:37.051097 4813 scope.go:117] "RemoveContainer" containerID="5a7ab610a3c323904b49fb346bfb5bfd21fe5707ab51c0de4176662641459056" Nov 25 10:51:37 crc kubenswrapper[4813]: E1125 10:51:37.051503 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=barbican-operator-controller-manager-86dc4d89c8-4wff2_openstack-operators(03c63a63-9a46-4bda-941b-8c5ba81a13fe)\"" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-4wff2" podUID="03c63a63-9a46-4bda-941b-8c5ba81a13fe" Nov 25 10:51:37 crc kubenswrapper[4813]: I1125 10:51:37.055146 4813 generic.go:334] "Generic (PLEG): container finished" podID="af18e07e-95b3-476f-9604-824c36ae74a5" containerID="bfd8fcbed80dd21da2cdcede7a0a9ad1efdc3d7bca2b44668e148ea6a5fdde0f" exitCode=1 Nov 25 10:51:37 crc kubenswrapper[4813]: I1125 10:51:37.055194 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-8spkk" event={"ID":"af18e07e-95b3-476f-9604-824c36ae74a5","Type":"ContainerDied","Data":"bfd8fcbed80dd21da2cdcede7a0a9ad1efdc3d7bca2b44668e148ea6a5fdde0f"} Nov 25 10:51:37 crc kubenswrapper[4813]: I1125 10:51:37.057328 4813 scope.go:117] "RemoveContainer" containerID="bfd8fcbed80dd21da2cdcede7a0a9ad1efdc3d7bca2b44668e148ea6a5fdde0f" Nov 25 10:51:37 crc kubenswrapper[4813]: 
E1125 10:51:37.059237 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=horizon-operator-controller-manager-68c9694994-8spkk_openstack-operators(af18e07e-95b3-476f-9604-824c36ae74a5)\"" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-8spkk" podUID="af18e07e-95b3-476f-9604-824c36ae74a5" Nov 25 10:51:37 crc kubenswrapper[4813]: I1125 10:51:37.124274 4813 scope.go:117] "RemoveContainer" containerID="6cfbef3e5911a335e778a25cc22825312e21d3376c549b161d9302f36e73d1b9" Nov 25 10:51:37 crc kubenswrapper[4813]: I1125 10:51:37.240545 4813 scope.go:117] "RemoveContainer" containerID="16460e4f9c43088098ac12f9e10def54db37c1068c6a044a870425a3f19e77b4" Nov 25 10:51:37 crc kubenswrapper[4813]: I1125 10:51:37.304986 4813 scope.go:117] "RemoveContainer" containerID="86e726cf9b8333f0660a30b9e6f09b1e7a7dd75a7fe3436c10eff9990aebb19c" Nov 25 10:51:37 crc kubenswrapper[4813]: I1125 10:51:37.571012 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Nov 25 10:51:37 crc kubenswrapper[4813]: I1125 10:51:37.584422 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Nov 25 10:51:37 crc kubenswrapper[4813]: I1125 10:51:37.622036 4813 scope.go:117] "RemoveContainer" containerID="9a9014db12945f5c91d4957251d5c07fad072365298baa2de399c2d1672f60e6" Nov 25 10:51:37 crc kubenswrapper[4813]: I1125 10:51:37.622245 4813 scope.go:117] "RemoveContainer" containerID="8439f0e4f753871ca0b6d1cd7f0234c50f6500f26d7675f32dbb5d90cee04305" Nov 25 10:51:37 crc kubenswrapper[4813]: I1125 10:51:37.622404 4813 scope.go:117] "RemoveContainer" containerID="c79525bf17e1747505b559eb8e125a6012f2aa8ff9aaa37562d972c623d802a0" Nov 25 10:51:37 crc kubenswrapper[4813]: I1125 10:51:37.622482 4813 scope.go:117] "RemoveContainer" containerID="43d5691a3552e7c2d7e6aa05dd094621e377ecc88488a8e0c5598d77d496a181" Nov 25 10:51:37 crc kubenswrapper[4813]: I1125 10:51:37.622947 4813 scope.go:117] "RemoveContainer" containerID="d1e62b445459b34984999bd018b9ecc5cad36cfc97c7cb8b1e67620067d14695" Nov 25 10:51:37 crc kubenswrapper[4813]: I1125 10:51:37.623005 4813 scope.go:117] "RemoveContainer" containerID="852ade3a02bdc4966cc001cd60b4f66a047664199c14930683be6960cadaac48" Nov 25 10:51:37 crc kubenswrapper[4813]: I1125 10:51:37.646605 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Nov 25 10:51:37 crc kubenswrapper[4813]: I1125 10:51:37.858066 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Nov 25 10:51:38 crc kubenswrapper[4813]: I1125 10:51:38.066372 4813 generic.go:334] "Generic (PLEG): container finished" podID="db556642-a360-4559-8cde-7c25d7a893e0" containerID="d43aa5619836b50806c3c6f8793ae57891628615f721eb1b1d852cb274d9a62e" exitCode=1 Nov 25 10:51:38 crc kubenswrapper[4813]: I1125 10:51:38.066469 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-tc2mg" event={"ID":"db556642-a360-4559-8cde-7c25d7a893e0","Type":"ContainerDied","Data":"d43aa5619836b50806c3c6f8793ae57891628615f721eb1b1d852cb274d9a62e"} Nov 25 10:51:38 crc kubenswrapper[4813]: I1125 10:51:38.066544 4813 scope.go:117] "RemoveContainer" containerID="66ec40f15a48177338b909733dfca944bf8b176edec10b3c22c1f9a4cccae5b5" 
Nov 25 10:51:38 crc kubenswrapper[4813]: I1125 10:51:38.067234 4813 scope.go:117] "RemoveContainer" containerID="d43aa5619836b50806c3c6f8793ae57891628615f721eb1b1d852cb274d9a62e" Nov 25 10:51:38 crc kubenswrapper[4813]: E1125 10:51:38.067569 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=ovn-operator-controller-manager-66cf5c67ff-tc2mg_openstack-operators(db556642-a360-4559-8cde-7c25d7a893e0)\"" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-tc2mg" podUID="db556642-a360-4559-8cde-7c25d7a893e0" Nov 25 10:51:38 crc kubenswrapper[4813]: I1125 10:51:38.111027 4813 generic.go:334] "Generic (PLEG): container finished" podID="9374bbb0-b458-4c1c-a327-67bcbea83045" containerID="565871b19add60bb7d552c69efb73f85ee79a17dba2c06f7e68a57258c2ffb91" exitCode=1 Nov 25 10:51:38 crc kubenswrapper[4813]: I1125 10:51:38.111068 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-6j272" event={"ID":"9374bbb0-b458-4c1c-a327-67bcbea83045","Type":"ContainerDied","Data":"565871b19add60bb7d552c69efb73f85ee79a17dba2c06f7e68a57258c2ffb91"} Nov 25 10:51:38 crc kubenswrapper[4813]: I1125 10:51:38.111660 4813 scope.go:117] "RemoveContainer" containerID="565871b19add60bb7d552c69efb73f85ee79a17dba2c06f7e68a57258c2ffb91" Nov 25 10:51:38 crc kubenswrapper[4813]: E1125 10:51:38.111898 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=nova-operator-controller-manager-79556f57fc-6j272_openstack-operators(9374bbb0-b458-4c1c-a327-67bcbea83045)\"" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-6j272" podUID="9374bbb0-b458-4c1c-a327-67bcbea83045" Nov 25 10:51:38 crc kubenswrapper[4813]: I1125 10:51:38.141212 4813 scope.go:117] "RemoveContainer" containerID="992531cc19bfe1ced64390d5b58ded8d348ef8aad2de68f0eb7b8d5f8b4ff0d3" Nov 25 10:51:38 crc kubenswrapper[4813]: I1125 10:51:38.337372 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-q84kt" Nov 25 10:51:38 crc kubenswrapper[4813]: I1125 10:51:38.369414 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Nov 25 10:51:38 crc kubenswrapper[4813]: I1125 10:51:38.621759 4813 scope.go:117] "RemoveContainer" containerID="e4042b093b8fb2490684ba66d53230d906e4682f9e60b297770ef5c653c68a70" Nov 25 10:51:38 crc kubenswrapper[4813]: I1125 10:51:38.621904 4813 scope.go:117] "RemoveContainer" containerID="129fb58dceeec99f79108d79e4141e877c5ccddbc95d57d99165becf55b1745d" Nov 25 10:51:38 crc kubenswrapper[4813]: I1125 10:51:38.622020 4813 scope.go:117] "RemoveContainer" containerID="0627009aad30b0ce2e452421ea5038adf0d553e83c703897ef42fc34d1270eb5" Nov 25 10:51:38 crc kubenswrapper[4813]: I1125 10:51:38.622147 4813 scope.go:117] "RemoveContainer" containerID="6adae85f90a1da16b445e1a30fe09db98185ce36b6a45741031a9f7f69e1e630" Nov 25 10:51:38 crc kubenswrapper[4813]: I1125 10:51:38.622814 4813 scope.go:117] "RemoveContainer" containerID="2a277922b1d2931bedbc476b84bfcd968bab53c7c778a49f36382c68a2a67ab7" Nov 25 10:51:38 crc kubenswrapper[4813]: I1125 10:51:38.843139 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Nov 25 10:51:38 
crc kubenswrapper[4813]: I1125 10:51:38.843582 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Nov 25 10:51:38 crc kubenswrapper[4813]: I1125 10:51:38.852452 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-cm597" Nov 25 10:51:38 crc kubenswrapper[4813]: I1125 10:51:38.924649 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Nov 25 10:51:38 crc kubenswrapper[4813]: I1125 10:51:38.981288 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Nov 25 10:51:39 crc kubenswrapper[4813]: I1125 10:51:39.121465 4813 generic.go:334] "Generic (PLEG): container finished" podID="baf6f7bb-db50-4013-8b77-2b7e4c8101c2" containerID="151f57154401a3ebcad0931e8f36a6408b85b56586e1d9593e65d6e3084ddc72" exitCode=1 Nov 25 10:51:39 crc kubenswrapper[4813]: I1125 10:51:39.121533 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-5ldjd" event={"ID":"baf6f7bb-db50-4013-8b77-2b7e4c8101c2","Type":"ContainerDied","Data":"151f57154401a3ebcad0931e8f36a6408b85b56586e1d9593e65d6e3084ddc72"} Nov 25 10:51:39 crc kubenswrapper[4813]: I1125 10:51:39.121571 4813 scope.go:117] "RemoveContainer" containerID="0627009aad30b0ce2e452421ea5038adf0d553e83c703897ef42fc34d1270eb5" Nov 25 10:51:39 crc kubenswrapper[4813]: I1125 10:51:39.122227 4813 scope.go:117] "RemoveContainer" containerID="151f57154401a3ebcad0931e8f36a6408b85b56586e1d9593e65d6e3084ddc72" Nov 25 10:51:39 crc kubenswrapper[4813]: E1125 10:51:39.122603 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=mariadb-operator-controller-manager-cb6c4fdb7-5ldjd_openstack-operators(baf6f7bb-db50-4013-8b77-2b7e4c8101c2)\"" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-5ldjd" podUID="baf6f7bb-db50-4013-8b77-2b7e4c8101c2" Nov 25 10:51:39 crc kubenswrapper[4813]: I1125 10:51:39.124408 4813 generic.go:334] "Generic (PLEG): container finished" podID="94c3d2b4-f1bb-402d-a39d-78e16bee970b" containerID="f3d988bd30ecb6ef6616939c2676db805195343ec40184587311abe7a65d0fbb" exitCode=1 Nov 25 10:51:39 crc kubenswrapper[4813]: I1125 10:51:39.124475 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-fjkzd" event={"ID":"94c3d2b4-f1bb-402d-a39d-78e16bee970b","Type":"ContainerDied","Data":"f3d988bd30ecb6ef6616939c2676db805195343ec40184587311abe7a65d0fbb"} Nov 25 10:51:39 crc kubenswrapper[4813]: I1125 10:51:39.124980 4813 scope.go:117] "RemoveContainer" containerID="f3d988bd30ecb6ef6616939c2676db805195343ec40184587311abe7a65d0fbb" Nov 25 10:51:39 crc kubenswrapper[4813]: E1125 10:51:39.125208 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=swift-operator-controller-manager-6fdc4fcf86-fjkzd_openstack-operators(94c3d2b4-f1bb-402d-a39d-78e16bee970b)\"" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-fjkzd" podUID="94c3d2b4-f1bb-402d-a39d-78e16bee970b" Nov 25 10:51:39 crc kubenswrapper[4813]: I1125 10:51:39.127657 4813 
generic.go:334] "Generic (PLEG): container finished" podID="7921584b-8ce0-45b8-8a56-ab0fdde43582" containerID="e573c49981c6852d396a8509942b6e6ccf6672cb0c8326cf0eb3d5e1a9e8c845" exitCode=1 Nov 25 10:51:39 crc kubenswrapper[4813]: I1125 10:51:39.127730 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-76j46" event={"ID":"7921584b-8ce0-45b8-8a56-ab0fdde43582","Type":"ContainerDied","Data":"e573c49981c6852d396a8509942b6e6ccf6672cb0c8326cf0eb3d5e1a9e8c845"} Nov 25 10:51:39 crc kubenswrapper[4813]: I1125 10:51:39.128132 4813 scope.go:117] "RemoveContainer" containerID="e573c49981c6852d396a8509942b6e6ccf6672cb0c8326cf0eb3d5e1a9e8c845" Nov 25 10:51:39 crc kubenswrapper[4813]: E1125 10:51:39.128365 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=keystone-operator-controller-manager-748dc6576f-76j46_openstack-operators(7921584b-8ce0-45b8-8a56-ab0fdde43582)\"" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-76j46" podUID="7921584b-8ce0-45b8-8a56-ab0fdde43582" Nov 25 10:51:39 crc kubenswrapper[4813]: I1125 10:51:39.131266 4813 generic.go:334] "Generic (PLEG): container finished" podID="09bd1800-0aaa-4908-ac58-e0890a2a309f" containerID="bd462bbd41de67f216310e3db3aecff932f3fa06f9964903533c0cb109c5d29a" exitCode=1 Nov 25 10:51:39 crc kubenswrapper[4813]: I1125 10:51:39.131340 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5ffc8f797b-hbwwd" event={"ID":"09bd1800-0aaa-4908-ac58-e0890a2a309f","Type":"ContainerDied","Data":"bd462bbd41de67f216310e3db3aecff932f3fa06f9964903533c0cb109c5d29a"} Nov 25 10:51:39 crc kubenswrapper[4813]: I1125 10:51:39.131993 4813 scope.go:117] "RemoveContainer" containerID="bd462bbd41de67f216310e3db3aecff932f3fa06f9964903533c0cb109c5d29a" Nov 25 10:51:39 crc kubenswrapper[4813]: E1125 10:51:39.132239 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=openstack-operator-controller-manager-5ffc8f797b-hbwwd_openstack-operators(09bd1800-0aaa-4908-ac58-e0890a2a309f)\"" pod="openstack-operators/openstack-operator-controller-manager-5ffc8f797b-hbwwd" podUID="09bd1800-0aaa-4908-ac58-e0890a2a309f" Nov 25 10:51:39 crc kubenswrapper[4813]: I1125 10:51:39.138293 4813 generic.go:334] "Generic (PLEG): container finished" podID="48ea1018-a88f-4ef0-a82f-7e3b012522ec" containerID="f6facba807807738a369dcafd72b8d71129ebba3f276630025dc6bc0ad7ff9f2" exitCode=1 Nov 25 10:51:39 crc kubenswrapper[4813]: I1125 10:51:39.138500 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-864885998-bpbjt" event={"ID":"48ea1018-a88f-4ef0-a82f-7e3b012522ec","Type":"ContainerDied","Data":"f6facba807807738a369dcafd72b8d71129ebba3f276630025dc6bc0ad7ff9f2"} Nov 25 10:51:39 crc kubenswrapper[4813]: I1125 10:51:39.140076 4813 scope.go:117] "RemoveContainer" containerID="f6facba807807738a369dcafd72b8d71129ebba3f276630025dc6bc0ad7ff9f2" Nov 25 10:51:39 crc kubenswrapper[4813]: E1125 10:51:39.140499 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager 
pod=watcher-operator-controller-manager-864885998-bpbjt_openstack-operators(48ea1018-a88f-4ef0-a82f-7e3b012522ec)\"" pod="openstack-operators/watcher-operator-controller-manager-864885998-bpbjt" podUID="48ea1018-a88f-4ef0-a82f-7e3b012522ec" Nov 25 10:51:39 crc kubenswrapper[4813]: I1125 10:51:39.143096 4813 generic.go:334] "Generic (PLEG): container finished" podID="9093a664-86f3-4349-bd13-0a5e4aca8036" containerID="c6e8f4156c728ea163af25ae8d442259c066569139df204a7fa159b0e158d28e" exitCode=1 Nov 25 10:51:39 crc kubenswrapper[4813]: I1125 10:51:39.143156 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-2d2x7" event={"ID":"9093a664-86f3-4349-bd13-0a5e4aca8036","Type":"ContainerDied","Data":"c6e8f4156c728ea163af25ae8d442259c066569139df204a7fa159b0e158d28e"} Nov 25 10:51:39 crc kubenswrapper[4813]: I1125 10:51:39.143820 4813 scope.go:117] "RemoveContainer" containerID="c6e8f4156c728ea163af25ae8d442259c066569139df204a7fa159b0e158d28e" Nov 25 10:51:39 crc kubenswrapper[4813]: E1125 10:51:39.144160 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=placement-operator-controller-manager-5db546f9d9-2d2x7_openstack-operators(9093a664-86f3-4349-bd13-0a5e4aca8036)\"" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-2d2x7" podUID="9093a664-86f3-4349-bd13-0a5e4aca8036" Nov 25 10:51:39 crc kubenswrapper[4813]: I1125 10:51:39.147300 4813 generic.go:334] "Generic (PLEG): container finished" podID="b69526d6-6616-4536-a228-4cdb57e1881c" containerID="3ee7e4d2a3463162d8318668cfd23d563d1861d1d089d820f02a7de59930eb4c" exitCode=1 Nov 25 10:51:39 crc kubenswrapper[4813]: I1125 10:51:39.147455 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-c6kw6" event={"ID":"b69526d6-6616-4536-a228-4cdb57e1881c","Type":"ContainerDied","Data":"3ee7e4d2a3463162d8318668cfd23d563d1861d1d089d820f02a7de59930eb4c"} Nov 25 10:51:39 crc kubenswrapper[4813]: I1125 10:51:39.148424 4813 scope.go:117] "RemoveContainer" containerID="3ee7e4d2a3463162d8318668cfd23d563d1861d1d089d820f02a7de59930eb4c" Nov 25 10:51:39 crc kubenswrapper[4813]: E1125 10:51:39.149523 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=neutron-operator-controller-manager-7c57c8bbc4-c6kw6_openstack-operators(b69526d6-6616-4536-a228-4cdb57e1881c)\"" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-c6kw6" podUID="b69526d6-6616-4536-a228-4cdb57e1881c" Nov 25 10:51:39 crc kubenswrapper[4813]: I1125 10:51:39.154136 4813 generic.go:334] "Generic (PLEG): container finished" podID="a31ffbb8-0255-45d6-9125-6cccc7b444ba" containerID="bd668a2879e65c02ee79f8f9b65f7f57384a493033ef21531f79b7713fe13d84" exitCode=1 Nov 25 10:51:39 crc kubenswrapper[4813]: I1125 10:51:39.154204 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-gjs27" event={"ID":"a31ffbb8-0255-45d6-9125-6cccc7b444ba","Type":"ContainerDied","Data":"bd668a2879e65c02ee79f8f9b65f7f57384a493033ef21531f79b7713fe13d84"} Nov 25 10:51:39 crc kubenswrapper[4813]: I1125 10:51:39.160347 4813 scope.go:117] "RemoveContainer" 
containerID="bd668a2879e65c02ee79f8f9b65f7f57384a493033ef21531f79b7713fe13d84" Nov 25 10:51:39 crc kubenswrapper[4813]: E1125 10:51:39.161754 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=octavia-operator-controller-manager-fd75fd47d-gjs27_openstack-operators(a31ffbb8-0255-45d6-9125-6cccc7b444ba)\"" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-gjs27" podUID="a31ffbb8-0255-45d6-9125-6cccc7b444ba" Nov 25 10:51:39 crc kubenswrapper[4813]: I1125 10:51:39.168279 4813 generic.go:334] "Generic (PLEG): container finished" podID="5f9254c7-c8dc-4504-bdf5-264c78e03b0c" containerID="225d0710504e73b1c1e6fcdd9b093a28e2a5b60c4aff05ee71678b87909a28d3" exitCode=1 Nov 25 10:51:39 crc kubenswrapper[4813]: I1125 10:51:39.168349 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-qplf9" event={"ID":"5f9254c7-c8dc-4504-bdf5-264c78e03b0c","Type":"ContainerDied","Data":"225d0710504e73b1c1e6fcdd9b093a28e2a5b60c4aff05ee71678b87909a28d3"} Nov 25 10:51:39 crc kubenswrapper[4813]: I1125 10:51:39.168990 4813 scope.go:117] "RemoveContainer" containerID="225d0710504e73b1c1e6fcdd9b093a28e2a5b60c4aff05ee71678b87909a28d3" Nov 25 10:51:39 crc kubenswrapper[4813]: E1125 10:51:39.169201 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=telemetry-operator-controller-manager-567f98c9d-qplf9_openstack-operators(5f9254c7-c8dc-4504-bdf5-264c78e03b0c)\"" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-qplf9" podUID="5f9254c7-c8dc-4504-bdf5-264c78e03b0c" Nov 25 10:51:39 crc kubenswrapper[4813]: I1125 10:51:39.172776 4813 generic.go:334] "Generic (PLEG): container finished" podID="d4a62556-e6e8-42dc-b7e4-180c40611393" containerID="780d87d09c967309daa59ca92087c8228e2f4e65f95c031f844934a27f83e390" exitCode=1 Nov 25 10:51:39 crc kubenswrapper[4813]: I1125 10:51:39.172827 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-blrjt" event={"ID":"d4a62556-e6e8-42dc-b7e4-180c40611393","Type":"ContainerDied","Data":"780d87d09c967309daa59ca92087c8228e2f4e65f95c031f844934a27f83e390"} Nov 25 10:51:39 crc kubenswrapper[4813]: I1125 10:51:39.173284 4813 scope.go:117] "RemoveContainer" containerID="780d87d09c967309daa59ca92087c8228e2f4e65f95c031f844934a27f83e390" Nov 25 10:51:39 crc kubenswrapper[4813]: E1125 10:51:39.173493 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=ironic-operator-controller-manager-5bfcdc958c-blrjt_openstack-operators(d4a62556-e6e8-42dc-b7e4-180c40611393)\"" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-blrjt" podUID="d4a62556-e6e8-42dc-b7e4-180c40611393" Nov 25 10:51:39 crc kubenswrapper[4813]: I1125 10:51:39.177776 4813 generic.go:334] "Generic (PLEG): container finished" podID="aa2934d9-d547-49d0-9d06-232120b44fa1" containerID="7c6c417432731434aa1b2265b04ad2e7e1e9105be30859d914fe45bf0a6c9adb" exitCode=1 Nov 25 10:51:39 crc kubenswrapper[4813]: I1125 10:51:39.177856 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-hjqzd" 
event={"ID":"aa2934d9-d547-49d0-9d06-232120b44fa1","Type":"ContainerDied","Data":"7c6c417432731434aa1b2265b04ad2e7e1e9105be30859d914fe45bf0a6c9adb"} Nov 25 10:51:39 crc kubenswrapper[4813]: I1125 10:51:39.178341 4813 scope.go:117] "RemoveContainer" containerID="7c6c417432731434aa1b2265b04ad2e7e1e9105be30859d914fe45bf0a6c9adb" Nov 25 10:51:39 crc kubenswrapper[4813]: E1125 10:51:39.178574 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=designate-operator-controller-manager-7d695c9b56-hjqzd_openstack-operators(aa2934d9-d547-49d0-9d06-232120b44fa1)\"" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-hjqzd" podUID="aa2934d9-d547-49d0-9d06-232120b44fa1" Nov 25 10:51:39 crc kubenswrapper[4813]: I1125 10:51:39.182176 4813 scope.go:117] "RemoveContainer" containerID="9a9014db12945f5c91d4957251d5c07fad072365298baa2de399c2d1672f60e6" Nov 25 10:51:39 crc kubenswrapper[4813]: I1125 10:51:39.239743 4813 scope.go:117] "RemoveContainer" containerID="e4042b093b8fb2490684ba66d53230d906e4682f9e60b297770ef5c653c68a70" Nov 25 10:51:39 crc kubenswrapper[4813]: I1125 10:51:39.296079 4813 scope.go:117] "RemoveContainer" containerID="d1e62b445459b34984999bd018b9ecc5cad36cfc97c7cb8b1e67620067d14695" Nov 25 10:51:39 crc kubenswrapper[4813]: I1125 10:51:39.320007 4813 scope.go:117] "RemoveContainer" containerID="129fb58dceeec99f79108d79e4141e877c5ccddbc95d57d99165becf55b1745d" Nov 25 10:51:39 crc kubenswrapper[4813]: I1125 10:51:39.353152 4813 scope.go:117] "RemoveContainer" containerID="852ade3a02bdc4966cc001cd60b4f66a047664199c14930683be6960cadaac48" Nov 25 10:51:39 crc kubenswrapper[4813]: I1125 10:51:39.378140 4813 scope.go:117] "RemoveContainer" containerID="43d5691a3552e7c2d7e6aa05dd094621e377ecc88488a8e0c5598d77d496a181" Nov 25 10:51:39 crc kubenswrapper[4813]: I1125 10:51:39.411032 4813 scope.go:117] "RemoveContainer" containerID="2a277922b1d2931bedbc476b84bfcd968bab53c7c778a49f36382c68a2a67ab7" Nov 25 10:51:39 crc kubenswrapper[4813]: I1125 10:51:39.433864 4813 scope.go:117] "RemoveContainer" containerID="8439f0e4f753871ca0b6d1cd7f0234c50f6500f26d7675f32dbb5d90cee04305" Nov 25 10:51:39 crc kubenswrapper[4813]: I1125 10:51:39.456966 4813 scope.go:117] "RemoveContainer" containerID="c79525bf17e1747505b559eb8e125a6012f2aa8ff9aaa37562d972c623d802a0" Nov 25 10:51:39 crc kubenswrapper[4813]: I1125 10:51:39.477475 4813 scope.go:117] "RemoveContainer" containerID="6adae85f90a1da16b445e1a30fe09db98185ce36b6a45741031a9f7f69e1e630" Nov 25 10:51:39 crc kubenswrapper[4813]: I1125 10:51:39.504614 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 25 10:51:39 crc kubenswrapper[4813]: I1125 10:51:39.595050 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-qjpvf" podUID="da545e4e-8f60-4fb5-93e8-d9e9014c3c74" containerName="ovn-controller" probeResult="failure" output=< Nov 25 10:51:39 crc kubenswrapper[4813]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 25 10:51:39 crc kubenswrapper[4813]: > Nov 25 10:51:39 crc kubenswrapper[4813]: I1125 10:51:39.621820 4813 scope.go:117] "RemoveContainer" containerID="2151bd31d0069b61def43848e29c57b6d08b542f9888b266dabceb722a50f8fa" Nov 25 10:51:39 crc kubenswrapper[4813]: I1125 10:51:39.621975 4813 scope.go:117] "RemoveContainer" 
containerID="aa2d95b74c8b460ce076d792421db9415d752e61eb487f0fbfdbe47d00194d5b" Nov 25 10:51:39 crc kubenswrapper[4813]: I1125 10:51:39.622061 4813 scope.go:117] "RemoveContainer" containerID="214e46a8b9a71b8264e51a0cf7e2d11786fdb4b5d0f1d240813790d9bee31895" Nov 25 10:51:39 crc kubenswrapper[4813]: I1125 10:51:39.796071 4813 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Nov 25 10:51:39 crc kubenswrapper[4813]: I1125 10:51:39.801417 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Nov 25 10:51:39 crc kubenswrapper[4813]: I1125 10:51:39.812027 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Nov 25 10:51:39 crc kubenswrapper[4813]: I1125 10:51:39.957595 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Nov 25 10:51:40 crc kubenswrapper[4813]: I1125 10:51:40.096554 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 25 10:51:40 crc kubenswrapper[4813]: I1125 10:51:40.142326 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Nov 25 10:51:40 crc kubenswrapper[4813]: I1125 10:51:40.198154 4813 generic.go:334] "Generic (PLEG): container finished" podID="06c81a1e-0461-4457-85ea-1a4060423eda" containerID="22b74a0493a9b4ac5f4bd66b0e2e92cc280ba3864cf6078ec2e8672fbea90133" exitCode=1 Nov 25 10:51:40 crc kubenswrapper[4813]: I1125 10:51:40.198232 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-858778c9dc-fs9sm" event={"ID":"06c81a1e-0461-4457-85ea-1a4060423eda","Type":"ContainerDied","Data":"22b74a0493a9b4ac5f4bd66b0e2e92cc280ba3864cf6078ec2e8672fbea90133"} Nov 25 10:51:40 crc kubenswrapper[4813]: I1125 10:51:40.198517 4813 scope.go:117] "RemoveContainer" containerID="2151bd31d0069b61def43848e29c57b6d08b542f9888b266dabceb722a50f8fa" Nov 25 10:51:40 crc kubenswrapper[4813]: I1125 10:51:40.199504 4813 scope.go:117] "RemoveContainer" containerID="22b74a0493a9b4ac5f4bd66b0e2e92cc280ba3864cf6078ec2e8672fbea90133" Nov 25 10:51:40 crc kubenswrapper[4813]: E1125 10:51:40.200028 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=infra-operator-controller-manager-858778c9dc-fs9sm_openstack-operators(06c81a1e-0461-4457-85ea-1a4060423eda)\"" pod="openstack-operators/infra-operator-controller-manager-858778c9dc-fs9sm" podUID="06c81a1e-0461-4457-85ea-1a4060423eda" Nov 25 10:51:40 crc kubenswrapper[4813]: I1125 10:51:40.217486 4813 generic.go:334] "Generic (PLEG): container finished" podID="71c5bfc5-a289-4942-bc55-819f06787eb6" containerID="e400b372bef5517a7b482f5cba2a18963d9c857b5f7a86b80c9ecb2be398a4ca" exitCode=1 Nov 25 10:51:40 crc kubenswrapper[4813]: I1125 10:51:40.217569 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-547cf68667-6v6dd" event={"ID":"71c5bfc5-a289-4942-bc55-819f06787eb6","Type":"ContainerDied","Data":"e400b372bef5517a7b482f5cba2a18963d9c857b5f7a86b80c9ecb2be398a4ca"} Nov 25 10:51:40 crc kubenswrapper[4813]: I1125 10:51:40.218198 4813 scope.go:117] "RemoveContainer" containerID="e400b372bef5517a7b482f5cba2a18963d9c857b5f7a86b80c9ecb2be398a4ca" Nov 25 10:51:40 crc 
kubenswrapper[4813]: E1125 10:51:40.218423 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=glance-operator-controller-manager-547cf68667-6v6dd_openstack-operators(71c5bfc5-a289-4942-bc55-819f06787eb6)\"" pod="openstack-operators/glance-operator-controller-manager-547cf68667-6v6dd" podUID="71c5bfc5-a289-4942-bc55-819f06787eb6" Nov 25 10:51:40 crc kubenswrapper[4813]: I1125 10:51:40.227778 4813 generic.go:334] "Generic (PLEG): container finished" podID="eaf6f1c0-6585-4eba-8baf-942ed2503735" containerID="2d244e16ae4e8c25f8f8687fa0ea5badbf10a8bf54a80aeaaba3d5d52017d701" exitCode=1 Nov 25 10:51:40 crc kubenswrapper[4813]: I1125 10:51:40.227850 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-774b86978c-f6dvp" event={"ID":"eaf6f1c0-6585-4eba-8baf-942ed2503735","Type":"ContainerDied","Data":"2d244e16ae4e8c25f8f8687fa0ea5badbf10a8bf54a80aeaaba3d5d52017d701"} Nov 25 10:51:40 crc kubenswrapper[4813]: I1125 10:51:40.228548 4813 scope.go:117] "RemoveContainer" containerID="2d244e16ae4e8c25f8f8687fa0ea5badbf10a8bf54a80aeaaba3d5d52017d701" Nov 25 10:51:40 crc kubenswrapper[4813]: E1125 10:51:40.228847 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=heat-operator-controller-manager-774b86978c-f6dvp_openstack-operators(eaf6f1c0-6585-4eba-8baf-942ed2503735)\"" pod="openstack-operators/heat-operator-controller-manager-774b86978c-f6dvp" podUID="eaf6f1c0-6585-4eba-8baf-942ed2503735" Nov 25 10:51:40 crc kubenswrapper[4813]: I1125 10:51:40.266978 4813 scope.go:117] "RemoveContainer" containerID="aa2d95b74c8b460ce076d792421db9415d752e61eb487f0fbfdbe47d00194d5b" Nov 25 10:51:40 crc kubenswrapper[4813]: I1125 10:51:40.315155 4813 scope.go:117] "RemoveContainer" containerID="214e46a8b9a71b8264e51a0cf7e2d11786fdb4b5d0f1d240813790d9bee31895" Nov 25 10:51:40 crc kubenswrapper[4813]: I1125 10:51:40.389105 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 25 10:51:40 crc kubenswrapper[4813]: I1125 10:51:40.555622 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Nov 25 10:51:40 crc kubenswrapper[4813]: I1125 10:51:40.591483 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Nov 25 10:51:40 crc kubenswrapper[4813]: I1125 10:51:40.817071 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Nov 25 10:51:40 crc kubenswrapper[4813]: I1125 10:51:40.984597 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Nov 25 10:51:41 crc kubenswrapper[4813]: I1125 10:51:41.465616 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Nov 25 10:51:41 crc kubenswrapper[4813]: I1125 10:51:41.483918 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-pnvfx" Nov 25 10:51:41 crc kubenswrapper[4813]: I1125 10:51:41.622098 4813 scope.go:117] "RemoveContainer" containerID="4ffc2b4595865022305b801310818c2bd583c104890ff2594a5df89a6f821aad" Nov 25 
10:51:41 crc kubenswrapper[4813]: E1125 10:51:41.622316 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cert-manager-cainjector\" with CrashLoopBackOff: \"back-off 20s restarting failed container=cert-manager-cainjector pod=cert-manager-cainjector-7f985d654d-9bjpb_cert-manager(396645a8-bd9a-429a-8d95-33dcec24c4ba)\"" pod="cert-manager/cert-manager-cainjector-7f985d654d-9bjpb" podUID="396645a8-bd9a-429a-8d95-33dcec24c4ba" Nov 25 10:51:41 crc kubenswrapper[4813]: I1125 10:51:41.669161 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Nov 25 10:51:41 crc kubenswrapper[4813]: I1125 10:51:41.986121 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Nov 25 10:51:42 crc kubenswrapper[4813]: I1125 10:51:42.164417 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Nov 25 10:51:42 crc kubenswrapper[4813]: I1125 10:51:42.191632 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Nov 25 10:51:42 crc kubenswrapper[4813]: I1125 10:51:42.369837 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Nov 25 10:51:42 crc kubenswrapper[4813]: I1125 10:51:42.401020 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 25 10:51:42 crc kubenswrapper[4813]: I1125 10:51:42.575483 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Nov 25 10:51:42 crc kubenswrapper[4813]: I1125 10:51:42.716333 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 25 10:51:42 crc kubenswrapper[4813]: I1125 10:51:42.870139 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Nov 25 10:51:42 crc kubenswrapper[4813]: I1125 10:51:42.948480 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Nov 25 10:51:43 crc kubenswrapper[4813]: I1125 10:51:43.045206 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-6n2p7" Nov 25 10:51:43 crc kubenswrapper[4813]: I1125 10:51:43.060598 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Nov 25 10:51:43 crc kubenswrapper[4813]: I1125 10:51:43.263622 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Nov 25 10:51:43 crc kubenswrapper[4813]: I1125 10:51:43.736187 4813 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Nov 25 10:51:43 crc kubenswrapper[4813]: I1125 10:51:43.964532 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 25 10:51:44 crc kubenswrapper[4813]: I1125 10:51:44.007707 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Nov 25 10:51:44 crc kubenswrapper[4813]: 
I1125 10:51:44.164345 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Nov 25 10:51:44 crc kubenswrapper[4813]: I1125 10:51:44.329714 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="aea2efa1-cb45-4657-8ea6-efd7799cb0a4" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.101:5671: connect: connection refused" Nov 25 10:51:44 crc kubenswrapper[4813]: I1125 10:51:44.343860 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-4wff2" Nov 25 10:51:44 crc kubenswrapper[4813]: I1125 10:51:44.343947 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-4wff2" Nov 25 10:51:44 crc kubenswrapper[4813]: I1125 10:51:44.344828 4813 scope.go:117] "RemoveContainer" containerID="5a7ab610a3c323904b49fb346bfb5bfd21fe5707ab51c0de4176662641459056" Nov 25 10:51:44 crc kubenswrapper[4813]: E1125 10:51:44.345265 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=barbican-operator-controller-manager-86dc4d89c8-4wff2_openstack-operators(03c63a63-9a46-4bda-941b-8c5ba81a13fe)\"" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-4wff2" podUID="03c63a63-9a46-4bda-941b-8c5ba81a13fe" Nov 25 10:51:44 crc kubenswrapper[4813]: I1125 10:51:44.361224 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-dvfd9" Nov 25 10:51:44 crc kubenswrapper[4813]: I1125 10:51:44.361286 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-dvfd9" Nov 25 10:51:44 crc kubenswrapper[4813]: I1125 10:51:44.362107 4813 scope.go:117] "RemoveContainer" containerID="e558d240a6bd77705f05b2797b29ea6fd8c416ff6fd3f1b978e06358afb10f7b" Nov 25 10:51:44 crc kubenswrapper[4813]: E1125 10:51:44.362497 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=cinder-operator-controller-manager-79856dc55c-dvfd9_openstack-operators(a650bdd3-2541-4b76-b5db-64273262bc06)\"" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-dvfd9" podUID="a650bdd3-2541-4b76-b5db-64273262bc06" Nov 25 10:51:44 crc kubenswrapper[4813]: I1125 10:51:44.380766 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-hjqzd" Nov 25 10:51:44 crc kubenswrapper[4813]: I1125 10:51:44.380829 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-hjqzd" Nov 25 10:51:44 crc kubenswrapper[4813]: I1125 10:51:44.381868 4813 scope.go:117] "RemoveContainer" containerID="7c6c417432731434aa1b2265b04ad2e7e1e9105be30859d914fe45bf0a6c9adb" Nov 25 10:51:44 crc kubenswrapper[4813]: E1125 10:51:44.382219 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager 
pod=designate-operator-controller-manager-7d695c9b56-hjqzd_openstack-operators(aa2934d9-d547-49d0-9d06-232120b44fa1)\"" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-hjqzd" podUID="aa2934d9-d547-49d0-9d06-232120b44fa1" Nov 25 10:51:44 crc kubenswrapper[4813]: I1125 10:51:44.415744 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/glance-operator-controller-manager-547cf68667-6v6dd" Nov 25 10:51:44 crc kubenswrapper[4813]: I1125 10:51:44.415814 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-547cf68667-6v6dd" Nov 25 10:51:44 crc kubenswrapper[4813]: I1125 10:51:44.416529 4813 scope.go:117] "RemoveContainer" containerID="e400b372bef5517a7b482f5cba2a18963d9c857b5f7a86b80c9ecb2be398a4ca" Nov 25 10:51:44 crc kubenswrapper[4813]: E1125 10:51:44.416826 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=glance-operator-controller-manager-547cf68667-6v6dd_openstack-operators(71c5bfc5-a289-4942-bc55-819f06787eb6)\"" pod="openstack-operators/glance-operator-controller-manager-547cf68667-6v6dd" podUID="71c5bfc5-a289-4942-bc55-819f06787eb6" Nov 25 10:51:44 crc kubenswrapper[4813]: I1125 10:51:44.457367 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/heat-operator-controller-manager-774b86978c-f6dvp" Nov 25 10:51:44 crc kubenswrapper[4813]: I1125 10:51:44.457416 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-774b86978c-f6dvp" Nov 25 10:51:44 crc kubenswrapper[4813]: I1125 10:51:44.458105 4813 scope.go:117] "RemoveContainer" containerID="2d244e16ae4e8c25f8f8687fa0ea5badbf10a8bf54a80aeaaba3d5d52017d701" Nov 25 10:51:44 crc kubenswrapper[4813]: E1125 10:51:44.458336 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=heat-operator-controller-manager-774b86978c-f6dvp_openstack-operators(eaf6f1c0-6585-4eba-8baf-942ed2503735)\"" pod="openstack-operators/heat-operator-controller-manager-774b86978c-f6dvp" podUID="eaf6f1c0-6585-4eba-8baf-942ed2503735" Nov 25 10:51:44 crc kubenswrapper[4813]: I1125 10:51:44.471751 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-8spkk" Nov 25 10:51:44 crc kubenswrapper[4813]: I1125 10:51:44.471819 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-8spkk" Nov 25 10:51:44 crc kubenswrapper[4813]: I1125 10:51:44.472528 4813 scope.go:117] "RemoveContainer" containerID="bfd8fcbed80dd21da2cdcede7a0a9ad1efdc3d7bca2b44668e148ea6a5fdde0f" Nov 25 10:51:44 crc kubenswrapper[4813]: E1125 10:51:44.472810 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=horizon-operator-controller-manager-68c9694994-8spkk_openstack-operators(af18e07e-95b3-476f-9604-824c36ae74a5)\"" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-8spkk" podUID="af18e07e-95b3-476f-9604-824c36ae74a5" Nov 25 10:51:44 crc 
kubenswrapper[4813]: I1125 10:51:44.527866 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Nov 25 10:51:44 crc kubenswrapper[4813]: I1125 10:51:44.570044 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Nov 25 10:51:44 crc kubenswrapper[4813]: I1125 10:51:44.573758 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-blrjt" Nov 25 10:51:44 crc kubenswrapper[4813]: I1125 10:51:44.574013 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-blrjt" Nov 25 10:51:44 crc kubenswrapper[4813]: I1125 10:51:44.574943 4813 scope.go:117] "RemoveContainer" containerID="780d87d09c967309daa59ca92087c8228e2f4e65f95c031f844934a27f83e390" Nov 25 10:51:44 crc kubenswrapper[4813]: E1125 10:51:44.575328 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=ironic-operator-controller-manager-5bfcdc958c-blrjt_openstack-operators(d4a62556-e6e8-42dc-b7e4-180c40611393)\"" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-blrjt" podUID="d4a62556-e6e8-42dc-b7e4-180c40611393" Nov 25 10:51:44 crc kubenswrapper[4813]: I1125 10:51:44.591007 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/infra-operator-controller-manager-858778c9dc-fs9sm" Nov 25 10:51:44 crc kubenswrapper[4813]: I1125 10:51:44.592209 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-858778c9dc-fs9sm" Nov 25 10:51:44 crc kubenswrapper[4813]: I1125 10:51:44.598939 4813 scope.go:117] "RemoveContainer" containerID="22b74a0493a9b4ac5f4bd66b0e2e92cc280ba3864cf6078ec2e8672fbea90133" Nov 25 10:51:44 crc kubenswrapper[4813]: E1125 10:51:44.599754 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=infra-operator-controller-manager-858778c9dc-fs9sm_openstack-operators(06c81a1e-0461-4457-85ea-1a4060423eda)\"" pod="openstack-operators/infra-operator-controller-manager-858778c9dc-fs9sm" podUID="06c81a1e-0461-4457-85ea-1a4060423eda" Nov 25 10:51:44 crc kubenswrapper[4813]: I1125 10:51:44.604096 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="bf91d2ed-6d43-49b1-8010-1f59f38aea76" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.102:5671: connect: connection refused" Nov 25 10:51:44 crc kubenswrapper[4813]: I1125 10:51:44.604285 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-qjpvf" podUID="da545e4e-8f60-4fb5-93e8-d9e9014c3c74" containerName="ovn-controller" probeResult="failure" output=< Nov 25 10:51:44 crc kubenswrapper[4813]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 25 10:51:44 crc kubenswrapper[4813]: > Nov 25 10:51:44 crc kubenswrapper[4813]: I1125 10:51:44.623746 4813 scope.go:117] "RemoveContainer" containerID="b1071257cf141e4ed949afbe28f925bacd737907f1dc1d027f282faf5869e5aa" Nov 25 10:51:44 crc kubenswrapper[4813]: E1125 10:51:44.624058 4813 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=metallb-operator-controller-manager-6b84b955f5-mmrm7_metallb-system(a6eb0ffd-2e55-4d5a-9ac7-19b25ba6ec8b)\"" pod="metallb-system/metallb-operator-controller-manager-6b84b955f5-mmrm7" podUID="a6eb0ffd-2e55-4d5a-9ac7-19b25ba6ec8b" Nov 25 10:51:44 crc kubenswrapper[4813]: I1125 10:51:44.624087 4813 scope.go:117] "RemoveContainer" containerID="91295643a276fd8e2a13cfaa5b1900a0ab4fc266378e1218439d9294577c93cd" Nov 25 10:51:44 crc kubenswrapper[4813]: I1125 10:51:44.723797 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Nov 25 10:51:44 crc kubenswrapper[4813]: I1125 10:51:44.748030 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-jcjzx" Nov 25 10:51:44 crc kubenswrapper[4813]: I1125 10:51:44.748194 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-jcjzx" Nov 25 10:51:44 crc kubenswrapper[4813]: I1125 10:51:44.748754 4813 scope.go:117] "RemoveContainer" containerID="30c767accdc9d5805bb70bfa2132237ce239b924316a2f4a373fc18a12755362" Nov 25 10:51:44 crc kubenswrapper[4813]: E1125 10:51:44.749020 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=manila-operator-controller-manager-58bb8d67cc-jcjzx_openstack-operators(efca9205-8a59-45ce-8c50-36b0d0389f12)\"" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-jcjzx" podUID="efca9205-8a59-45ce-8c50-36b0d0389f12" Nov 25 10:51:44 crc kubenswrapper[4813]: I1125 10:51:44.765897 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-76j46" Nov 25 10:51:44 crc kubenswrapper[4813]: I1125 10:51:44.766325 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-76j46" Nov 25 10:51:44 crc kubenswrapper[4813]: I1125 10:51:44.767123 4813 scope.go:117] "RemoveContainer" containerID="e573c49981c6852d396a8509942b6e6ccf6672cb0c8326cf0eb3d5e1a9e8c845" Nov 25 10:51:44 crc kubenswrapper[4813]: E1125 10:51:44.767732 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=keystone-operator-controller-manager-748dc6576f-76j46_openstack-operators(7921584b-8ce0-45b8-8a56-ab0fdde43582)\"" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-76j46" podUID="7921584b-8ce0-45b8-8a56-ab0fdde43582" Nov 25 10:51:44 crc kubenswrapper[4813]: I1125 10:51:44.809540 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-5ldjd" Nov 25 10:51:44 crc kubenswrapper[4813]: I1125 10:51:44.809601 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-5ldjd" Nov 25 10:51:44 crc kubenswrapper[4813]: I1125 10:51:44.810300 4813 scope.go:117] "RemoveContainer" 
containerID="151f57154401a3ebcad0931e8f36a6408b85b56586e1d9593e65d6e3084ddc72" Nov 25 10:51:44 crc kubenswrapper[4813]: E1125 10:51:44.810525 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=mariadb-operator-controller-manager-cb6c4fdb7-5ldjd_openstack-operators(baf6f7bb-db50-4013-8b77-2b7e4c8101c2)\"" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-5ldjd" podUID="baf6f7bb-db50-4013-8b77-2b7e4c8101c2" Nov 25 10:51:44 crc kubenswrapper[4813]: I1125 10:51:44.828576 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-c6kw6" Nov 25 10:51:44 crc kubenswrapper[4813]: I1125 10:51:44.828876 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-c6kw6" Nov 25 10:51:44 crc kubenswrapper[4813]: I1125 10:51:44.829495 4813 scope.go:117] "RemoveContainer" containerID="3ee7e4d2a3463162d8318668cfd23d563d1861d1d089d820f02a7de59930eb4c" Nov 25 10:51:44 crc kubenswrapper[4813]: E1125 10:51:44.829735 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=neutron-operator-controller-manager-7c57c8bbc4-c6kw6_openstack-operators(b69526d6-6616-4536-a228-4cdb57e1881c)\"" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-c6kw6" podUID="b69526d6-6616-4536-a228-4cdb57e1881c" Nov 25 10:51:44 crc kubenswrapper[4813]: I1125 10:51:44.833657 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-6j272" Nov 25 10:51:44 crc kubenswrapper[4813]: I1125 10:51:44.833797 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-6j272" Nov 25 10:51:44 crc kubenswrapper[4813]: I1125 10:51:44.834266 4813 scope.go:117] "RemoveContainer" containerID="565871b19add60bb7d552c69efb73f85ee79a17dba2c06f7e68a57258c2ffb91" Nov 25 10:51:44 crc kubenswrapper[4813]: E1125 10:51:44.834511 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=nova-operator-controller-manager-79556f57fc-6j272_openstack-operators(9374bbb0-b458-4c1c-a327-67bcbea83045)\"" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-6j272" podUID="9374bbb0-b458-4c1c-a327-67bcbea83045" Nov 25 10:51:44 crc kubenswrapper[4813]: I1125 10:51:44.877780 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-gjs27" Nov 25 10:51:44 crc kubenswrapper[4813]: I1125 10:51:44.878666 4813 scope.go:117] "RemoveContainer" containerID="bd668a2879e65c02ee79f8f9b65f7f57384a493033ef21531f79b7713fe13d84" Nov 25 10:51:44 crc kubenswrapper[4813]: E1125 10:51:44.879008 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=octavia-operator-controller-manager-fd75fd47d-gjs27_openstack-operators(a31ffbb8-0255-45d6-9125-6cccc7b444ba)\"" 
pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-gjs27" podUID="a31ffbb8-0255-45d6-9125-6cccc7b444ba" Nov 25 10:51:44 crc kubenswrapper[4813]: I1125 10:51:44.879444 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-gjs27" Nov 25 10:51:44 crc kubenswrapper[4813]: I1125 10:51:44.963470 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Nov 25 10:51:44 crc kubenswrapper[4813]: I1125 10:51:44.970860 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Nov 25 10:51:44 crc kubenswrapper[4813]: I1125 10:51:44.991489 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-2d2x7" Nov 25 10:51:44 crc kubenswrapper[4813]: I1125 10:51:44.991881 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-2d2x7" Nov 25 10:51:44 crc kubenswrapper[4813]: I1125 10:51:44.992800 4813 scope.go:117] "RemoveContainer" containerID="c6e8f4156c728ea163af25ae8d442259c066569139df204a7fa159b0e158d28e" Nov 25 10:51:44 crc kubenswrapper[4813]: E1125 10:51:44.993169 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=placement-operator-controller-manager-5db546f9d9-2d2x7_openstack-operators(9093a664-86f3-4349-bd13-0a5e4aca8036)\"" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-2d2x7" podUID="9093a664-86f3-4349-bd13-0a5e4aca8036" Nov 25 10:51:45 crc kubenswrapper[4813]: I1125 10:51:45.029243 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-fjkzd" Nov 25 10:51:45 crc kubenswrapper[4813]: I1125 10:51:45.029302 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-fjkzd" Nov 25 10:51:45 crc kubenswrapper[4813]: I1125 10:51:45.030087 4813 scope.go:117] "RemoveContainer" containerID="f3d988bd30ecb6ef6616939c2676db805195343ec40184587311abe7a65d0fbb" Nov 25 10:51:45 crc kubenswrapper[4813]: E1125 10:51:45.030482 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=swift-operator-controller-manager-6fdc4fcf86-fjkzd_openstack-operators(94c3d2b4-f1bb-402d-a39d-78e16bee970b)\"" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-fjkzd" podUID="94c3d2b4-f1bb-402d-a39d-78e16bee970b" Nov 25 10:51:45 crc kubenswrapper[4813]: I1125 10:51:45.060564 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-qplf9" Nov 25 10:51:45 crc kubenswrapper[4813]: I1125 10:51:45.060627 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-qplf9" Nov 25 10:51:45 crc kubenswrapper[4813]: I1125 10:51:45.061495 4813 scope.go:117] "RemoveContainer" 
containerID="225d0710504e73b1c1e6fcdd9b093a28e2a5b60c4aff05ee71678b87909a28d3" Nov 25 10:51:45 crc kubenswrapper[4813]: E1125 10:51:45.061789 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=telemetry-operator-controller-manager-567f98c9d-qplf9_openstack-operators(5f9254c7-c8dc-4504-bdf5-264c78e03b0c)\"" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-qplf9" podUID="5f9254c7-c8dc-4504-bdf5-264c78e03b0c" Nov 25 10:51:45 crc kubenswrapper[4813]: I1125 10:51:45.080498 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Nov 25 10:51:45 crc kubenswrapper[4813]: I1125 10:51:45.106539 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Nov 25 10:51:45 crc kubenswrapper[4813]: I1125 10:51:45.181435 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-tc2mg" Nov 25 10:51:45 crc kubenswrapper[4813]: I1125 10:51:45.181865 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-tc2mg" Nov 25 10:51:45 crc kubenswrapper[4813]: I1125 10:51:45.182723 4813 scope.go:117] "RemoveContainer" containerID="d43aa5619836b50806c3c6f8793ae57891628615f721eb1b1d852cb274d9a62e" Nov 25 10:51:45 crc kubenswrapper[4813]: E1125 10:51:45.182994 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=ovn-operator-controller-manager-66cf5c67ff-tc2mg_openstack-operators(db556642-a360-4559-8cde-7c25d7a893e0)\"" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-tc2mg" podUID="db556642-a360-4559-8cde-7c25d7a893e0" Nov 25 10:51:45 crc kubenswrapper[4813]: I1125 10:51:45.226386 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/watcher-operator-controller-manager-864885998-bpbjt" Nov 25 10:51:45 crc kubenswrapper[4813]: I1125 10:51:45.226506 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-864885998-bpbjt" Nov 25 10:51:45 crc kubenswrapper[4813]: I1125 10:51:45.227084 4813 scope.go:117] "RemoveContainer" containerID="f6facba807807738a369dcafd72b8d71129ebba3f276630025dc6bc0ad7ff9f2" Nov 25 10:51:45 crc kubenswrapper[4813]: E1125 10:51:45.227315 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=watcher-operator-controller-manager-864885998-bpbjt_openstack-operators(48ea1018-a88f-4ef0-a82f-7e3b012522ec)\"" pod="openstack-operators/watcher-operator-controller-manager-864885998-bpbjt" podUID="48ea1018-a88f-4ef0-a82f-7e3b012522ec" Nov 25 10:51:45 crc kubenswrapper[4813]: I1125 10:51:45.244706 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Nov 25 10:51:45 crc kubenswrapper[4813]: I1125 10:51:45.284575 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" 
event={"ID":"e9030c35-b810-4f59-b1e6-5daec39fcc6d","Type":"ContainerStarted","Data":"17e97d1f1615d6ce6039d0630c38f6ec2765a678b6cb22c925b45be0c9795fa6"} Nov 25 10:51:45 crc kubenswrapper[4813]: I1125 10:51:45.285154 4813 scope.go:117] "RemoveContainer" containerID="565871b19add60bb7d552c69efb73f85ee79a17dba2c06f7e68a57258c2ffb91" Nov 25 10:51:45 crc kubenswrapper[4813]: I1125 10:51:45.285243 4813 scope.go:117] "RemoveContainer" containerID="f6facba807807738a369dcafd72b8d71129ebba3f276630025dc6bc0ad7ff9f2" Nov 25 10:51:45 crc kubenswrapper[4813]: E1125 10:51:45.285417 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=nova-operator-controller-manager-79556f57fc-6j272_openstack-operators(9374bbb0-b458-4c1c-a327-67bcbea83045)\"" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-6j272" podUID="9374bbb0-b458-4c1c-a327-67bcbea83045" Nov 25 10:51:45 crc kubenswrapper[4813]: I1125 10:51:45.285451 4813 scope.go:117] "RemoveContainer" containerID="30c767accdc9d5805bb70bfa2132237ce239b924316a2f4a373fc18a12755362" Nov 25 10:51:45 crc kubenswrapper[4813]: E1125 10:51:45.285482 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=watcher-operator-controller-manager-864885998-bpbjt_openstack-operators(48ea1018-a88f-4ef0-a82f-7e3b012522ec)\"" pod="openstack-operators/watcher-operator-controller-manager-864885998-bpbjt" podUID="48ea1018-a88f-4ef0-a82f-7e3b012522ec" Nov 25 10:51:45 crc kubenswrapper[4813]: I1125 10:51:45.285522 4813 scope.go:117] "RemoveContainer" containerID="22b74a0493a9b4ac5f4bd66b0e2e92cc280ba3864cf6078ec2e8672fbea90133" Nov 25 10:51:45 crc kubenswrapper[4813]: I1125 10:51:45.285592 4813 scope.go:117] "RemoveContainer" containerID="3ee7e4d2a3463162d8318668cfd23d563d1861d1d089d820f02a7de59930eb4c" Nov 25 10:51:45 crc kubenswrapper[4813]: E1125 10:51:45.285631 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=manila-operator-controller-manager-58bb8d67cc-jcjzx_openstack-operators(efca9205-8a59-45ce-8c50-36b0d0389f12)\"" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-jcjzx" podUID="efca9205-8a59-45ce-8c50-36b0d0389f12" Nov 25 10:51:45 crc kubenswrapper[4813]: I1125 10:51:45.285709 4813 scope.go:117] "RemoveContainer" containerID="780d87d09c967309daa59ca92087c8228e2f4e65f95c031f844934a27f83e390" Nov 25 10:51:45 crc kubenswrapper[4813]: E1125 10:51:45.285769 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=infra-operator-controller-manager-858778c9dc-fs9sm_openstack-operators(06c81a1e-0461-4457-85ea-1a4060423eda)\"" pod="openstack-operators/infra-operator-controller-manager-858778c9dc-fs9sm" podUID="06c81a1e-0461-4457-85ea-1a4060423eda" Nov 25 10:51:45 crc kubenswrapper[4813]: E1125 10:51:45.285849 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=neutron-operator-controller-manager-7c57c8bbc4-c6kw6_openstack-operators(b69526d6-6616-4536-a228-4cdb57e1881c)\"" 
pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-c6kw6" podUID="b69526d6-6616-4536-a228-4cdb57e1881c" Nov 25 10:51:45 crc kubenswrapper[4813]: I1125 10:51:45.285888 4813 scope.go:117] "RemoveContainer" containerID="bd668a2879e65c02ee79f8f9b65f7f57384a493033ef21531f79b7713fe13d84" Nov 25 10:51:45 crc kubenswrapper[4813]: E1125 10:51:45.285912 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=ironic-operator-controller-manager-5bfcdc958c-blrjt_openstack-operators(d4a62556-e6e8-42dc-b7e4-180c40611393)\"" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-blrjt" podUID="d4a62556-e6e8-42dc-b7e4-180c40611393" Nov 25 10:51:45 crc kubenswrapper[4813]: E1125 10:51:45.286096 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=octavia-operator-controller-manager-fd75fd47d-gjs27_openstack-operators(a31ffbb8-0255-45d6-9125-6cccc7b444ba)\"" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-gjs27" podUID="a31ffbb8-0255-45d6-9125-6cccc7b444ba" Nov 25 10:51:45 crc kubenswrapper[4813]: I1125 10:51:45.300824 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=38.514125152 podStartE2EDuration="1m27.300803791s" podCreationTimestamp="2025-11-25 10:50:18 +0000 UTC" firstStartedPulling="2025-11-25 10:50:56.212287547 +0000 UTC m=+1153.341997433" lastFinishedPulling="2025-11-25 10:51:44.998966196 +0000 UTC m=+1202.128676072" observedRunningTime="2025-11-25 10:51:45.297127807 +0000 UTC m=+1202.426837703" watchObservedRunningTime="2025-11-25 10:51:45.300803791 +0000 UTC m=+1202.430513677" Nov 25 10:51:45 crc kubenswrapper[4813]: I1125 10:51:45.358031 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Nov 25 10:51:45 crc kubenswrapper[4813]: I1125 10:51:45.379008 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Nov 25 10:51:45 crc kubenswrapper[4813]: I1125 10:51:45.621666 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cdcjj/must-gather-5vk8l" Nov 25 10:51:45 crc kubenswrapper[4813]: I1125 10:51:45.621698 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 25 10:51:45 crc kubenswrapper[4813]: I1125 10:51:45.622426 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cdcjj/must-gather-5vk8l" Nov 25 10:51:45 crc kubenswrapper[4813]: I1125 10:51:45.623069 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 25 10:51:45 crc kubenswrapper[4813]: I1125 10:51:45.674070 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Nov 25 10:51:45 crc kubenswrapper[4813]: I1125 10:51:45.854386 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Nov 25 10:51:45 crc kubenswrapper[4813]: I1125 10:51:45.972317 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Nov 25 10:51:46 crc kubenswrapper[4813]: I1125 10:51:46.143919 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-cdcjj/must-gather-5vk8l"] Nov 25 10:51:46 crc kubenswrapper[4813]: W1125 10:51:46.148902 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod90d80d33_b519_4d67_97ba_1b8b828e917b.slice/crio-1bb52eebef73adea578e97137bfc9e2b708f102c1f3cb37ce54c7e7ee21424ff WatchSource:0}: Error finding container 1bb52eebef73adea578e97137bfc9e2b708f102c1f3cb37ce54c7e7ee21424ff: Status 404 returned error can't find the container with id 1bb52eebef73adea578e97137bfc9e2b708f102c1f3cb37ce54c7e7ee21424ff Nov 25 10:51:46 crc kubenswrapper[4813]: I1125 10:51:46.170046 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Nov 25 10:51:46 crc kubenswrapper[4813]: I1125 10:51:46.190411 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 25 10:51:46 crc kubenswrapper[4813]: W1125 10:51:46.192802 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c8efda2_acd3_4ecf_9295_0ad8d037ca94.slice/crio-f8672529ae0c9b8986110d34617e1c837e4eeff60c19cf691e9d2bce1e39703a WatchSource:0}: Error finding container f8672529ae0c9b8986110d34617e1c837e4eeff60c19cf691e9d2bce1e39703a: Status 404 returned error can't find the container with id f8672529ae0c9b8986110d34617e1c837e4eeff60c19cf691e9d2bce1e39703a Nov 25 10:51:46 crc kubenswrapper[4813]: I1125 10:51:46.293012 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cdcjj/must-gather-5vk8l" event={"ID":"90d80d33-b519-4d67-97ba-1b8b828e917b","Type":"ContainerStarted","Data":"1bb52eebef73adea578e97137bfc9e2b708f102c1f3cb37ce54c7e7ee21424ff"} Nov 25 10:51:46 crc kubenswrapper[4813]: I1125 10:51:46.295012 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"7c8efda2-acd3-4ecf-9295-0ad8d037ca94","Type":"ContainerStarted","Data":"f8672529ae0c9b8986110d34617e1c837e4eeff60c19cf691e9d2bce1e39703a"} Nov 25 10:51:46 crc kubenswrapper[4813]: I1125 10:51:46.296699 4813 generic.go:334] "Generic (PLEG): container finished" podID="18df3708-b841-4af2-acb4-de42ed8ec241" containerID="b2c4004dc2865166443f88e7d21c18740b51e3771cba93151a6015b0447aca61" exitCode=0 Nov 25 10:51:46 crc kubenswrapper[4813]: I1125 10:51:46.296740 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-skdbx" event={"ID":"18df3708-b841-4af2-acb4-de42ed8ec241","Type":"ContainerDied","Data":"b2c4004dc2865166443f88e7d21c18740b51e3771cba93151a6015b0447aca61"} Nov 25 10:51:46 crc kubenswrapper[4813]: I1125 10:51:46.297369 4813 scope.go:117] "RemoveContainer" containerID="b2c4004dc2865166443f88e7d21c18740b51e3771cba93151a6015b0447aca61" Nov 25 
10:51:46 crc kubenswrapper[4813]: I1125 10:51:46.446692 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Nov 25 10:51:46 crc kubenswrapper[4813]: I1125 10:51:46.448552 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Nov 25 10:51:46 crc kubenswrapper[4813]: I1125 10:51:46.620827 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 25 10:51:46 crc kubenswrapper[4813]: I1125 10:51:46.621728 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 25 10:51:46 crc kubenswrapper[4813]: I1125 10:51:46.654805 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Nov 25 10:51:46 crc kubenswrapper[4813]: I1125 10:51:46.754461 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-h8jr8" Nov 25 10:51:46 crc kubenswrapper[4813]: I1125 10:51:46.787161 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Nov 25 10:51:46 crc kubenswrapper[4813]: I1125 10:51:46.823446 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Nov 25 10:51:46 crc kubenswrapper[4813]: I1125 10:51:46.909650 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Nov 25 10:51:47 crc kubenswrapper[4813]: I1125 10:51:47.169749 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 25 10:51:47 crc kubenswrapper[4813]: W1125 10:51:47.300492 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod04683d4b_dec7_42f6_9803_b301f1d449c3.slice/crio-ebf4d2538e5985d7ba8762c7e77db87f9c6ce14d0540ceebc7301f159fd612ea WatchSource:0}: Error finding container ebf4d2538e5985d7ba8762c7e77db87f9c6ce14d0540ceebc7301f159fd612ea: Status 404 returned error can't find the container with id ebf4d2538e5985d7ba8762c7e77db87f9c6ce14d0540ceebc7301f159fd612ea Nov 25 10:51:47 crc kubenswrapper[4813]: I1125 10:51:47.312269 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-skdbx" event={"ID":"18df3708-b841-4af2-acb4-de42ed8ec241","Type":"ContainerStarted","Data":"0fbda9f364ddee3a87cd462bf3324d49ccfe02ac70e15128086a0be5e4cbd30c"} Nov 25 10:51:47 crc kubenswrapper[4813]: I1125 10:51:47.314622 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-skdbx" Nov 25 10:51:47 crc kubenswrapper[4813]: I1125 10:51:47.315114 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-skdbx" Nov 25 10:51:47 crc kubenswrapper[4813]: I1125 10:51:47.340171 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Nov 25 10:51:47 crc kubenswrapper[4813]: I1125 10:51:47.418089 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Nov 25 10:51:47 crc kubenswrapper[4813]: I1125 10:51:47.439083 4813 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Nov 25 10:51:47 crc kubenswrapper[4813]: I1125 10:51:47.454069 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Nov 25 10:51:47 crc kubenswrapper[4813]: I1125 10:51:47.493321 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Nov 25 10:51:47 crc kubenswrapper[4813]: I1125 10:51:47.515485 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Nov 25 10:51:47 crc kubenswrapper[4813]: I1125 10:51:47.563120 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Nov 25 10:51:47 crc kubenswrapper[4813]: I1125 10:51:47.645058 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Nov 25 10:51:47 crc kubenswrapper[4813]: I1125 10:51:47.765838 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Nov 25 10:51:47 crc kubenswrapper[4813]: I1125 10:51:47.854353 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Nov 25 10:51:48 crc kubenswrapper[4813]: I1125 10:51:48.338340 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"04683d4b-dec7-42f6-9803-b301f1d449c3","Type":"ContainerStarted","Data":"ebf4d2538e5985d7ba8762c7e77db87f9c6ce14d0540ceebc7301f159fd612ea"} Nov 25 10:51:48 crc kubenswrapper[4813]: I1125 10:51:48.342748 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"7c8efda2-acd3-4ecf-9295-0ad8d037ca94","Type":"ContainerStarted","Data":"c5f79d8c63c0afbf63bd126e4ee08897034943ee3363370dc16b66c9cb810b79"} Nov 25 10:51:48 crc kubenswrapper[4813]: I1125 10:51:48.348178 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Nov 25 10:51:48 crc kubenswrapper[4813]: I1125 10:51:48.640098 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Nov 25 10:51:48 crc kubenswrapper[4813]: I1125 10:51:48.672796 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Nov 25 10:51:48 crc kubenswrapper[4813]: I1125 10:51:48.725887 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 25 10:51:48 crc kubenswrapper[4813]: I1125 10:51:48.840181 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Nov 25 10:51:48 crc kubenswrapper[4813]: I1125 10:51:48.985132 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/openstack-operator-controller-manager-5ffc8f797b-hbwwd" Nov 25 10:51:48 crc kubenswrapper[4813]: I1125 10:51:48.985813 4813 scope.go:117] "RemoveContainer" containerID="bd462bbd41de67f216310e3db3aecff932f3fa06f9964903533c0cb109c5d29a" Nov 25 10:51:48 crc kubenswrapper[4813]: E1125 10:51:48.986110 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed 
container=manager pod=openstack-operator-controller-manager-5ffc8f797b-hbwwd_openstack-operators(09bd1800-0aaa-4908-ac58-e0890a2a309f)\"" pod="openstack-operators/openstack-operator-controller-manager-5ffc8f797b-hbwwd" podUID="09bd1800-0aaa-4908-ac58-e0890a2a309f" Nov 25 10:51:48 crc kubenswrapper[4813]: I1125 10:51:48.986512 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-5ffc8f797b-hbwwd" Nov 25 10:51:49 crc kubenswrapper[4813]: I1125 10:51:49.265179 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 25 10:51:49 crc kubenswrapper[4813]: I1125 10:51:49.353744 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"04683d4b-dec7-42f6-9803-b301f1d449c3","Type":"ContainerStarted","Data":"9398aaadec19a895b0c2624db5dfcd3c85ba6e61f1dcd7939b19f39f9ad667f6"} Nov 25 10:51:49 crc kubenswrapper[4813]: I1125 10:51:49.354806 4813 scope.go:117] "RemoveContainer" containerID="bd462bbd41de67f216310e3db3aecff932f3fa06f9964903533c0cb109c5d29a" Nov 25 10:51:49 crc kubenswrapper[4813]: E1125 10:51:49.355314 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=openstack-operator-controller-manager-5ffc8f797b-hbwwd_openstack-operators(09bd1800-0aaa-4908-ac58-e0890a2a309f)\"" pod="openstack-operators/openstack-operator-controller-manager-5ffc8f797b-hbwwd" podUID="09bd1800-0aaa-4908-ac58-e0890a2a309f" Nov 25 10:51:49 crc kubenswrapper[4813]: I1125 10:51:49.460255 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-tx99p" Nov 25 10:51:49 crc kubenswrapper[4813]: I1125 10:51:49.593418 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-qjpvf" podUID="da545e4e-8f60-4fb5-93e8-d9e9014c3c74" containerName="ovn-controller" probeResult="failure" output=< Nov 25 10:51:49 crc kubenswrapper[4813]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 25 10:51:49 crc kubenswrapper[4813]: > Nov 25 10:51:49 crc kubenswrapper[4813]: I1125 10:51:49.843832 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Nov 25 10:51:50 crc kubenswrapper[4813]: I1125 10:51:50.141659 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Nov 25 10:51:50 crc kubenswrapper[4813]: I1125 10:51:50.617314 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-rzl9k" Nov 25 10:51:50 crc kubenswrapper[4813]: I1125 10:51:50.623460 4813 scope.go:117] "RemoveContainer" containerID="08e7f311e38946acbfb35ae6b1a86c7ad47e62db1724f5a533a0c9ebfbd382a3" Nov 25 10:51:50 crc kubenswrapper[4813]: E1125 10:51:50.623657 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=operator pod=rabbitmq-cluster-operator-manager-668c99d594-qd4tx_openstack-operators(2bf03402-32ec-423d-a6af-657bc0cfeb15)\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qd4tx" podUID="2bf03402-32ec-423d-a6af-657bc0cfeb15" Nov 25 10:51:50 crc kubenswrapper[4813]: I1125 10:51:50.630572 4813 
reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Nov 25 10:51:51 crc kubenswrapper[4813]: I1125 10:51:51.371207 4813 generic.go:334] "Generic (PLEG): container finished" podID="bf91d2ed-6d43-49b1-8010-1f59f38aea76" containerID="6da1ff7d6fcae58f674efd8d3293596350556c5b00a3e9b7de75cac5015c696e" exitCode=0 Nov 25 10:51:51 crc kubenswrapper[4813]: I1125 10:51:51.371283 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"bf91d2ed-6d43-49b1-8010-1f59f38aea76","Type":"ContainerDied","Data":"6da1ff7d6fcae58f674efd8d3293596350556c5b00a3e9b7de75cac5015c696e"} Nov 25 10:51:51 crc kubenswrapper[4813]: I1125 10:51:51.371987 4813 scope.go:117] "RemoveContainer" containerID="6da1ff7d6fcae58f674efd8d3293596350556c5b00a3e9b7de75cac5015c696e" Nov 25 10:51:51 crc kubenswrapper[4813]: I1125 10:51:51.377017 4813 generic.go:334] "Generic (PLEG): container finished" podID="aea2efa1-cb45-4657-8ea6-efd7799cb0a4" containerID="913f926750f49dfe77513dbf4232783df214e00f76210764b2934cb0fad38b6b" exitCode=0 Nov 25 10:51:51 crc kubenswrapper[4813]: I1125 10:51:51.377064 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"aea2efa1-cb45-4657-8ea6-efd7799cb0a4","Type":"ContainerDied","Data":"913f926750f49dfe77513dbf4232783df214e00f76210764b2934cb0fad38b6b"} Nov 25 10:51:51 crc kubenswrapper[4813]: I1125 10:51:51.377828 4813 scope.go:117] "RemoveContainer" containerID="913f926750f49dfe77513dbf4232783df214e00f76210764b2934cb0fad38b6b" Nov 25 10:51:51 crc kubenswrapper[4813]: I1125 10:51:51.417243 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-tnzzc" Nov 25 10:51:51 crc kubenswrapper[4813]: I1125 10:51:51.453709 4813 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Nov 25 10:51:51 crc kubenswrapper[4813]: I1125 10:51:51.612253 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 25 10:51:51 crc kubenswrapper[4813]: I1125 10:51:51.698619 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-j697j" Nov 25 10:51:51 crc kubenswrapper[4813]: I1125 10:51:51.767034 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Nov 25 10:51:52 crc kubenswrapper[4813]: I1125 10:51:52.386962 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"7c8efda2-acd3-4ecf-9295-0ad8d037ca94","Type":"ContainerStarted","Data":"899325baf026ebb3d6066e9c33259c89fd9e009aee8db6dc127d0a3a58762599"} Nov 25 10:51:52 crc kubenswrapper[4813]: I1125 10:51:52.390471 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"aea2efa1-cb45-4657-8ea6-efd7799cb0a4","Type":"ContainerStarted","Data":"698cac60a978b705288eed6b2f78eb558f90f0bcd382da2a9af737902cce4aca"} Nov 25 10:51:52 crc kubenswrapper[4813]: I1125 10:51:52.390864 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 25 10:51:52 crc kubenswrapper[4813]: I1125 10:51:52.392408 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cdcjj/must-gather-5vk8l" 
event={"ID":"90d80d33-b519-4d67-97ba-1b8b828e917b","Type":"ContainerStarted","Data":"798caaa475c1034e2ef39591a630f1bb0b528da32dd1cf7acc1726a570970c5c"} Nov 25 10:51:52 crc kubenswrapper[4813]: I1125 10:51:52.397936 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"bf91d2ed-6d43-49b1-8010-1f59f38aea76","Type":"ContainerStarted","Data":"b26dfe3bfa16a358ba34719d2d171b58f608011bb3d587e65bf248799c25b60a"} Nov 25 10:51:52 crc kubenswrapper[4813]: I1125 10:51:52.398270 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 25 10:51:52 crc kubenswrapper[4813]: I1125 10:51:52.400560 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"04683d4b-dec7-42f6-9803-b301f1d449c3","Type":"ContainerStarted","Data":"bd3c27b54501cbac1f89ea3e5e76b31555088b962fc751d76a194115446836d8"} Nov 25 10:51:52 crc kubenswrapper[4813]: I1125 10:51:52.428253 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=82.681505347 podStartE2EDuration="1m28.428228224s" podCreationTimestamp="2025-11-25 10:50:24 +0000 UTC" firstStartedPulling="2025-11-25 10:51:46.194689486 +0000 UTC m=+1203.324399372" lastFinishedPulling="2025-11-25 10:51:51.941412363 +0000 UTC m=+1209.071122249" observedRunningTime="2025-11-25 10:51:52.414581086 +0000 UTC m=+1209.544290972" watchObservedRunningTime="2025-11-25 10:51:52.428228224 +0000 UTC m=+1209.557938100" Nov 25 10:51:52 crc kubenswrapper[4813]: I1125 10:51:52.490843 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Nov 25 10:51:52 crc kubenswrapper[4813]: I1125 10:51:52.506427 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=83.825188998 podStartE2EDuration="1m28.506409185s" podCreationTimestamp="2025-11-25 10:50:24 +0000 UTC" firstStartedPulling="2025-11-25 10:51:47.303734473 +0000 UTC m=+1204.433444349" lastFinishedPulling="2025-11-25 10:51:51.98495465 +0000 UTC m=+1209.114664536" observedRunningTime="2025-11-25 10:51:52.504208822 +0000 UTC m=+1209.633918718" watchObservedRunningTime="2025-11-25 10:51:52.506409185 +0000 UTC m=+1209.636119071" Nov 25 10:51:52 crc kubenswrapper[4813]: I1125 10:51:52.508011 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Nov 25 10:51:52 crc kubenswrapper[4813]: I1125 10:51:52.904240 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-cdcjj/crc-debug-wfh7d"] Nov 25 10:51:52 crc kubenswrapper[4813]: E1125 10:51:52.904637 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Nov 25 10:51:52 crc kubenswrapper[4813]: I1125 10:51:52.904654 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Nov 25 10:51:52 crc kubenswrapper[4813]: E1125 10:51:52.904702 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78498723-5c73-4aa4-8480-ef20ce8593ac" containerName="init" Nov 25 10:51:52 crc kubenswrapper[4813]: I1125 10:51:52.904713 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="78498723-5c73-4aa4-8480-ef20ce8593ac" containerName="init" Nov 25 10:51:52 crc kubenswrapper[4813]: E1125 10:51:52.904739 4813 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78498723-5c73-4aa4-8480-ef20ce8593ac" containerName="dnsmasq-dns" Nov 25 10:51:52 crc kubenswrapper[4813]: I1125 10:51:52.904747 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="78498723-5c73-4aa4-8480-ef20ce8593ac" containerName="dnsmasq-dns" Nov 25 10:51:52 crc kubenswrapper[4813]: I1125 10:51:52.904964 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="78498723-5c73-4aa4-8480-ef20ce8593ac" containerName="dnsmasq-dns" Nov 25 10:51:52 crc kubenswrapper[4813]: I1125 10:51:52.904983 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Nov 25 10:51:52 crc kubenswrapper[4813]: I1125 10:51:52.905744 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cdcjj/crc-debug-wfh7d" Nov 25 10:51:52 crc kubenswrapper[4813]: I1125 10:51:52.999068 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6dtt\" (UniqueName: \"kubernetes.io/projected/064ff305-c1e7-4539-bb22-f4be9b8f1445-kube-api-access-b6dtt\") pod \"crc-debug-wfh7d\" (UID: \"064ff305-c1e7-4539-bb22-f4be9b8f1445\") " pod="openshift-must-gather-cdcjj/crc-debug-wfh7d" Nov 25 10:51:52 crc kubenswrapper[4813]: I1125 10:51:52.999486 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/064ff305-c1e7-4539-bb22-f4be9b8f1445-host\") pod \"crc-debug-wfh7d\" (UID: \"064ff305-c1e7-4539-bb22-f4be9b8f1445\") " pod="openshift-must-gather-cdcjj/crc-debug-wfh7d" Nov 25 10:51:53 crc kubenswrapper[4813]: I1125 10:51:53.101330 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6dtt\" (UniqueName: \"kubernetes.io/projected/064ff305-c1e7-4539-bb22-f4be9b8f1445-kube-api-access-b6dtt\") pod \"crc-debug-wfh7d\" (UID: \"064ff305-c1e7-4539-bb22-f4be9b8f1445\") " pod="openshift-must-gather-cdcjj/crc-debug-wfh7d" Nov 25 10:51:53 crc kubenswrapper[4813]: I1125 10:51:53.103906 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/064ff305-c1e7-4539-bb22-f4be9b8f1445-host\") pod \"crc-debug-wfh7d\" (UID: \"064ff305-c1e7-4539-bb22-f4be9b8f1445\") " pod="openshift-must-gather-cdcjj/crc-debug-wfh7d" Nov 25 10:51:53 crc kubenswrapper[4813]: I1125 10:51:53.104005 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/064ff305-c1e7-4539-bb22-f4be9b8f1445-host\") pod \"crc-debug-wfh7d\" (UID: \"064ff305-c1e7-4539-bb22-f4be9b8f1445\") " pod="openshift-must-gather-cdcjj/crc-debug-wfh7d" Nov 25 10:51:53 crc kubenswrapper[4813]: I1125 10:51:53.120602 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6dtt\" (UniqueName: \"kubernetes.io/projected/064ff305-c1e7-4539-bb22-f4be9b8f1445-kube-api-access-b6dtt\") pod \"crc-debug-wfh7d\" (UID: \"064ff305-c1e7-4539-bb22-f4be9b8f1445\") " pod="openshift-must-gather-cdcjj/crc-debug-wfh7d" Nov 25 10:51:53 crc kubenswrapper[4813]: I1125 10:51:53.126968 4813 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-q54ck" Nov 25 10:51:53 crc kubenswrapper[4813]: I1125 10:51:53.133751 4813 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-4gr2w" Nov 25 10:51:53 crc kubenswrapper[4813]: I1125 10:51:53.231269 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cdcjj/crc-debug-wfh7d" Nov 25 10:51:53 crc kubenswrapper[4813]: I1125 10:51:53.315982 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Nov 25 10:51:53 crc kubenswrapper[4813]: I1125 10:51:53.381052 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Nov 25 10:51:53 crc kubenswrapper[4813]: I1125 10:51:53.403755 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Nov 25 10:51:53 crc kubenswrapper[4813]: I1125 10:51:53.410494 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cdcjj/crc-debug-wfh7d" event={"ID":"064ff305-c1e7-4539-bb22-f4be9b8f1445","Type":"ContainerStarted","Data":"11b73aa0b0cb93453a2e5cf39cdeac36289f08d52dae296ece7659d3fd35a700"} Nov 25 10:51:53 crc kubenswrapper[4813]: I1125 10:51:53.413304 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cdcjj/must-gather-5vk8l" event={"ID":"90d80d33-b519-4d67-97ba-1b8b828e917b","Type":"ContainerStarted","Data":"ba5c52abb0a377bea345a1da6bef0137070c0a6dfe996550bad543e2b5469636"} Nov 25 10:51:53 crc kubenswrapper[4813]: I1125 10:51:53.431525 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-cdcjj/must-gather-5vk8l" podStartSLOduration=41.733215064 podStartE2EDuration="47.431504906s" podCreationTimestamp="2025-11-25 10:51:06 +0000 UTC" firstStartedPulling="2025-11-25 10:51:46.151439357 +0000 UTC m=+1203.281149243" lastFinishedPulling="2025-11-25 10:51:51.849729199 +0000 UTC m=+1208.979439085" observedRunningTime="2025-11-25 10:51:53.428093259 +0000 UTC m=+1210.557803155" watchObservedRunningTime="2025-11-25 10:51:53.431504906 +0000 UTC m=+1210.561214792" Nov 25 10:51:53 crc kubenswrapper[4813]: I1125 10:51:53.436485 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Nov 25 10:51:53 crc kubenswrapper[4813]: I1125 10:51:53.451305 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Nov 25 10:51:53 crc kubenswrapper[4813]: I1125 10:51:53.628116 4813 scope.go:117] "RemoveContainer" containerID="4ffc2b4595865022305b801310818c2bd583c104890ff2594a5df89a6f821aad" Nov 25 10:51:53 crc kubenswrapper[4813]: I1125 10:51:53.798816 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Nov 25 10:51:54 crc kubenswrapper[4813]: I1125 10:51:54.382098 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Nov 25 10:51:54 crc kubenswrapper[4813]: I1125 10:51:54.404153 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Nov 25 10:51:54 crc kubenswrapper[4813]: I1125 10:51:54.434110 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-9bjpb" event={"ID":"396645a8-bd9a-429a-8d95-33dcec24c4ba","Type":"ContainerStarted","Data":"eb33c267b1e7fbf120d9a1ece48047842e0054caa85ccf3b8fefabb8a8c9e40b"} Nov 25 10:51:54 crc kubenswrapper[4813]: I1125 
10:51:54.450570 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Nov 25 10:51:54 crc kubenswrapper[4813]: I1125 10:51:54.466367 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Nov 25 10:51:54 crc kubenswrapper[4813]: I1125 10:51:54.537779 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Nov 25 10:51:54 crc kubenswrapper[4813]: I1125 10:51:54.614045 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-qjpvf" podUID="da545e4e-8f60-4fb5-93e8-d9e9014c3c74" containerName="ovn-controller" probeResult="failure" output=< Nov 25 10:51:54 crc kubenswrapper[4813]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 25 10:51:54 crc kubenswrapper[4813]: > Nov 25 10:51:55 crc kubenswrapper[4813]: I1125 10:51:55.064447 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Nov 25 10:51:55 crc kubenswrapper[4813]: I1125 10:51:55.586467 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Nov 25 10:51:55 crc kubenswrapper[4813]: I1125 10:51:55.626791 4813 scope.go:117] "RemoveContainer" containerID="5a7ab610a3c323904b49fb346bfb5bfd21fe5707ab51c0de4176662641459056" Nov 25 10:51:55 crc kubenswrapper[4813]: E1125 10:51:55.627024 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=barbican-operator-controller-manager-86dc4d89c8-4wff2_openstack-operators(03c63a63-9a46-4bda-941b-8c5ba81a13fe)\"" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-4wff2" podUID="03c63a63-9a46-4bda-941b-8c5ba81a13fe" Nov 25 10:51:55 crc kubenswrapper[4813]: I1125 10:51:55.627052 4813 scope.go:117] "RemoveContainer" containerID="b1071257cf141e4ed949afbe28f925bacd737907f1dc1d027f282faf5869e5aa" Nov 25 10:51:55 crc kubenswrapper[4813]: E1125 10:51:55.627215 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=metallb-operator-controller-manager-6b84b955f5-mmrm7_metallb-system(a6eb0ffd-2e55-4d5a-9ac7-19b25ba6ec8b)\"" pod="metallb-system/metallb-operator-controller-manager-6b84b955f5-mmrm7" podUID="a6eb0ffd-2e55-4d5a-9ac7-19b25ba6ec8b" Nov 25 10:51:56 crc kubenswrapper[4813]: I1125 10:51:56.621998 4813 scope.go:117] "RemoveContainer" containerID="f6facba807807738a369dcafd72b8d71129ebba3f276630025dc6bc0ad7ff9f2" Nov 25 10:51:56 crc kubenswrapper[4813]: I1125 10:51:56.622176 4813 scope.go:117] "RemoveContainer" containerID="151f57154401a3ebcad0931e8f36a6408b85b56586e1d9593e65d6e3084ddc72" Nov 25 10:51:56 crc kubenswrapper[4813]: E1125 10:51:56.622232 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=watcher-operator-controller-manager-864885998-bpbjt_openstack-operators(48ea1018-a88f-4ef0-a82f-7e3b012522ec)\"" pod="openstack-operators/watcher-operator-controller-manager-864885998-bpbjt" podUID="48ea1018-a88f-4ef0-a82f-7e3b012522ec" Nov 25 10:51:56 crc kubenswrapper[4813]: I1125 10:51:56.622297 4813 scope.go:117] 
"RemoveContainer" containerID="565871b19add60bb7d552c69efb73f85ee79a17dba2c06f7e68a57258c2ffb91" Nov 25 10:51:56 crc kubenswrapper[4813]: I1125 10:51:56.622397 4813 scope.go:117] "RemoveContainer" containerID="7c6c417432731434aa1b2265b04ad2e7e1e9105be30859d914fe45bf0a6c9adb" Nov 25 10:51:56 crc kubenswrapper[4813]: E1125 10:51:56.622470 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=mariadb-operator-controller-manager-cb6c4fdb7-5ldjd_openstack-operators(baf6f7bb-db50-4013-8b77-2b7e4c8101c2)\"" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-5ldjd" podUID="baf6f7bb-db50-4013-8b77-2b7e4c8101c2" Nov 25 10:51:56 crc kubenswrapper[4813]: E1125 10:51:56.622537 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=nova-operator-controller-manager-79556f57fc-6j272_openstack-operators(9374bbb0-b458-4c1c-a327-67bcbea83045)\"" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-6j272" podUID="9374bbb0-b458-4c1c-a327-67bcbea83045" Nov 25 10:51:56 crc kubenswrapper[4813]: E1125 10:51:56.622621 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=designate-operator-controller-manager-7d695c9b56-hjqzd_openstack-operators(aa2934d9-d547-49d0-9d06-232120b44fa1)\"" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-hjqzd" podUID="aa2934d9-d547-49d0-9d06-232120b44fa1" Nov 25 10:51:56 crc kubenswrapper[4813]: I1125 10:51:56.747413 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-nmx22" Nov 25 10:51:56 crc kubenswrapper[4813]: I1125 10:51:56.852206 4813 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Nov 25 10:51:57 crc kubenswrapper[4813]: I1125 10:51:57.621486 4813 scope.go:117] "RemoveContainer" containerID="e573c49981c6852d396a8509942b6e6ccf6672cb0c8326cf0eb3d5e1a9e8c845" Nov 25 10:51:57 crc kubenswrapper[4813]: E1125 10:51:57.621960 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=keystone-operator-controller-manager-748dc6576f-76j46_openstack-operators(7921584b-8ce0-45b8-8a56-ab0fdde43582)\"" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-76j46" podUID="7921584b-8ce0-45b8-8a56-ab0fdde43582" Nov 25 10:51:57 crc kubenswrapper[4813]: I1125 10:51:57.622088 4813 scope.go:117] "RemoveContainer" containerID="2d244e16ae4e8c25f8f8687fa0ea5badbf10a8bf54a80aeaaba3d5d52017d701" Nov 25 10:51:57 crc kubenswrapper[4813]: E1125 10:51:57.622473 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=heat-operator-controller-manager-774b86978c-f6dvp_openstack-operators(eaf6f1c0-6585-4eba-8baf-942ed2503735)\"" pod="openstack-operators/heat-operator-controller-manager-774b86978c-f6dvp" podUID="eaf6f1c0-6585-4eba-8baf-942ed2503735" Nov 25 10:51:57 crc kubenswrapper[4813]: I1125 10:51:57.622939 4813 scope.go:117] "RemoveContainer" 
containerID="e400b372bef5517a7b482f5cba2a18963d9c857b5f7a86b80c9ecb2be398a4ca" Nov 25 10:51:57 crc kubenswrapper[4813]: E1125 10:51:57.623104 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=glance-operator-controller-manager-547cf68667-6v6dd_openstack-operators(71c5bfc5-a289-4942-bc55-819f06787eb6)\"" pod="openstack-operators/glance-operator-controller-manager-547cf68667-6v6dd" podUID="71c5bfc5-a289-4942-bc55-819f06787eb6" Nov 25 10:51:57 crc kubenswrapper[4813]: I1125 10:51:57.623203 4813 scope.go:117] "RemoveContainer" containerID="bd668a2879e65c02ee79f8f9b65f7f57384a493033ef21531f79b7713fe13d84" Nov 25 10:51:57 crc kubenswrapper[4813]: E1125 10:51:57.623358 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=octavia-operator-controller-manager-fd75fd47d-gjs27_openstack-operators(a31ffbb8-0255-45d6-9125-6cccc7b444ba)\"" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-gjs27" podUID="a31ffbb8-0255-45d6-9125-6cccc7b444ba" Nov 25 10:51:57 crc kubenswrapper[4813]: I1125 10:51:57.623448 4813 scope.go:117] "RemoveContainer" containerID="bfd8fcbed80dd21da2cdcede7a0a9ad1efdc3d7bca2b44668e148ea6a5fdde0f" Nov 25 10:51:57 crc kubenswrapper[4813]: E1125 10:51:57.623616 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=horizon-operator-controller-manager-68c9694994-8spkk_openstack-operators(af18e07e-95b3-476f-9604-824c36ae74a5)\"" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-8spkk" podUID="af18e07e-95b3-476f-9604-824c36ae74a5" Nov 25 10:51:58 crc kubenswrapper[4813]: I1125 10:51:58.622055 4813 scope.go:117] "RemoveContainer" containerID="e558d240a6bd77705f05b2797b29ea6fd8c416ff6fd3f1b978e06358afb10f7b" Nov 25 10:51:58 crc kubenswrapper[4813]: E1125 10:51:58.622448 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=cinder-operator-controller-manager-79856dc55c-dvfd9_openstack-operators(a650bdd3-2541-4b76-b5db-64273262bc06)\"" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-dvfd9" podUID="a650bdd3-2541-4b76-b5db-64273262bc06" Nov 25 10:51:58 crc kubenswrapper[4813]: I1125 10:51:58.622569 4813 scope.go:117] "RemoveContainer" containerID="d43aa5619836b50806c3c6f8793ae57891628615f721eb1b1d852cb274d9a62e" Nov 25 10:51:58 crc kubenswrapper[4813]: E1125 10:51:58.622948 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=ovn-operator-controller-manager-66cf5c67ff-tc2mg_openstack-operators(db556642-a360-4559-8cde-7c25d7a893e0)\"" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-tc2mg" podUID="db556642-a360-4559-8cde-7c25d7a893e0" Nov 25 10:51:59 crc kubenswrapper[4813]: I1125 10:51:59.271760 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 25 10:51:59 crc kubenswrapper[4813]: I1125 10:51:59.601608 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-qjpvf" 
podUID="da545e4e-8f60-4fb5-93e8-d9e9014c3c74" containerName="ovn-controller" probeResult="failure" output=< Nov 25 10:51:59 crc kubenswrapper[4813]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 25 10:51:59 crc kubenswrapper[4813]: > Nov 25 10:51:59 crc kubenswrapper[4813]: I1125 10:51:59.621483 4813 scope.go:117] "RemoveContainer" containerID="c6e8f4156c728ea163af25ae8d442259c066569139df204a7fa159b0e158d28e" Nov 25 10:51:59 crc kubenswrapper[4813]: I1125 10:51:59.621787 4813 scope.go:117] "RemoveContainer" containerID="f3d988bd30ecb6ef6616939c2676db805195343ec40184587311abe7a65d0fbb" Nov 25 10:51:59 crc kubenswrapper[4813]: E1125 10:51:59.621931 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=placement-operator-controller-manager-5db546f9d9-2d2x7_openstack-operators(9093a664-86f3-4349-bd13-0a5e4aca8036)\"" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-2d2x7" podUID="9093a664-86f3-4349-bd13-0a5e4aca8036" Nov 25 10:51:59 crc kubenswrapper[4813]: E1125 10:51:59.622037 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=swift-operator-controller-manager-6fdc4fcf86-fjkzd_openstack-operators(94c3d2b4-f1bb-402d-a39d-78e16bee970b)\"" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-fjkzd" podUID="94c3d2b4-f1bb-402d-a39d-78e16bee970b" Nov 25 10:51:59 crc kubenswrapper[4813]: I1125 10:51:59.622319 4813 scope.go:117] "RemoveContainer" containerID="30c767accdc9d5805bb70bfa2132237ce239b924316a2f4a373fc18a12755362" Nov 25 10:51:59 crc kubenswrapper[4813]: E1125 10:51:59.622507 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=manila-operator-controller-manager-58bb8d67cc-jcjzx_openstack-operators(efca9205-8a59-45ce-8c50-36b0d0389f12)\"" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-jcjzx" podUID="efca9205-8a59-45ce-8c50-36b0d0389f12" Nov 25 10:52:00 crc kubenswrapper[4813]: I1125 10:52:00.621235 4813 scope.go:117] "RemoveContainer" containerID="780d87d09c967309daa59ca92087c8228e2f4e65f95c031f844934a27f83e390" Nov 25 10:52:00 crc kubenswrapper[4813]: E1125 10:52:00.621842 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=ironic-operator-controller-manager-5bfcdc958c-blrjt_openstack-operators(d4a62556-e6e8-42dc-b7e4-180c40611393)\"" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-blrjt" podUID="d4a62556-e6e8-42dc-b7e4-180c40611393" Nov 25 10:52:00 crc kubenswrapper[4813]: I1125 10:52:00.622370 4813 scope.go:117] "RemoveContainer" containerID="3ee7e4d2a3463162d8318668cfd23d563d1861d1d089d820f02a7de59930eb4c" Nov 25 10:52:00 crc kubenswrapper[4813]: E1125 10:52:00.622558 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=neutron-operator-controller-manager-7c57c8bbc4-c6kw6_openstack-operators(b69526d6-6616-4536-a228-4cdb57e1881c)\"" 
pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-c6kw6" podUID="b69526d6-6616-4536-a228-4cdb57e1881c" Nov 25 10:52:00 crc kubenswrapper[4813]: I1125 10:52:00.623008 4813 scope.go:117] "RemoveContainer" containerID="22b74a0493a9b4ac5f4bd66b0e2e92cc280ba3864cf6078ec2e8672fbea90133" Nov 25 10:52:00 crc kubenswrapper[4813]: E1125 10:52:00.623203 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=infra-operator-controller-manager-858778c9dc-fs9sm_openstack-operators(06c81a1e-0461-4457-85ea-1a4060423eda)\"" pod="openstack-operators/infra-operator-controller-manager-858778c9dc-fs9sm" podUID="06c81a1e-0461-4457-85ea-1a4060423eda" Nov 25 10:52:00 crc kubenswrapper[4813]: I1125 10:52:00.624070 4813 scope.go:117] "RemoveContainer" containerID="225d0710504e73b1c1e6fcdd9b093a28e2a5b60c4aff05ee71678b87909a28d3" Nov 25 10:52:00 crc kubenswrapper[4813]: E1125 10:52:00.624273 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=telemetry-operator-controller-manager-567f98c9d-qplf9_openstack-operators(5f9254c7-c8dc-4504-bdf5-264c78e03b0c)\"" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-qplf9" podUID="5f9254c7-c8dc-4504-bdf5-264c78e03b0c" Nov 25 10:52:01 crc kubenswrapper[4813]: I1125 10:52:01.621137 4813 scope.go:117] "RemoveContainer" containerID="08e7f311e38946acbfb35ae6b1a86c7ad47e62db1724f5a533a0c9ebfbd382a3" Nov 25 10:52:01 crc kubenswrapper[4813]: E1125 10:52:01.621431 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=operator pod=rabbitmq-cluster-operator-manager-668c99d594-qd4tx_openstack-operators(2bf03402-32ec-423d-a6af-657bc0cfeb15)\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qd4tx" podUID="2bf03402-32ec-423d-a6af-657bc0cfeb15" Nov 25 10:52:03 crc kubenswrapper[4813]: I1125 10:52:03.629761 4813 scope.go:117] "RemoveContainer" containerID="bd462bbd41de67f216310e3db3aecff932f3fa06f9964903533c0cb109c5d29a" Nov 25 10:52:03 crc kubenswrapper[4813]: E1125 10:52:03.630402 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=openstack-operator-controller-manager-5ffc8f797b-hbwwd_openstack-operators(09bd1800-0aaa-4908-ac58-e0890a2a309f)\"" pod="openstack-operators/openstack-operator-controller-manager-5ffc8f797b-hbwwd" podUID="09bd1800-0aaa-4908-ac58-e0890a2a309f" Nov 25 10:52:04 crc kubenswrapper[4813]: I1125 10:52:04.331494 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="aea2efa1-cb45-4657-8ea6-efd7799cb0a4" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.101:5671: connect: connection refused" Nov 25 10:52:04 crc kubenswrapper[4813]: I1125 10:52:04.606776 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="bf91d2ed-6d43-49b1-8010-1f59f38aea76" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.102:5671: connect: connection refused" Nov 25 10:52:04 crc kubenswrapper[4813]: I1125 10:52:04.612464 4813 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/ovn-controller-qjpvf" podUID="da545e4e-8f60-4fb5-93e8-d9e9014c3c74" containerName="ovn-controller" probeResult="failure" output=< Nov 25 10:52:04 crc kubenswrapper[4813]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 25 10:52:04 crc kubenswrapper[4813]: > Nov 25 10:52:05 crc kubenswrapper[4813]: I1125 10:52:05.548631 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cdcjj/crc-debug-wfh7d" event={"ID":"064ff305-c1e7-4539-bb22-f4be9b8f1445","Type":"ContainerStarted","Data":"2c71ea5a6fe8208692095f2dfe4b548efb8e7b7138ac8a25edf7c7c9db638185"} Nov 25 10:52:05 crc kubenswrapper[4813]: I1125 10:52:05.566499 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-cdcjj/crc-debug-wfh7d" podStartSLOduration=2.335115586 podStartE2EDuration="13.56647978s" podCreationTimestamp="2025-11-25 10:51:52 +0000 UTC" firstStartedPulling="2025-11-25 10:51:53.266193429 +0000 UTC m=+1210.395903305" lastFinishedPulling="2025-11-25 10:52:04.497557613 +0000 UTC m=+1221.627267499" observedRunningTime="2025-11-25 10:52:05.562105445 +0000 UTC m=+1222.691815331" watchObservedRunningTime="2025-11-25 10:52:05.56647978 +0000 UTC m=+1222.696189666" Nov 25 10:52:06 crc kubenswrapper[4813]: I1125 10:52:06.621709 4813 scope.go:117] "RemoveContainer" containerID="5a7ab610a3c323904b49fb346bfb5bfd21fe5707ab51c0de4176662641459056" Nov 25 10:52:06 crc kubenswrapper[4813]: E1125 10:52:06.622570 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=barbican-operator-controller-manager-86dc4d89c8-4wff2_openstack-operators(03c63a63-9a46-4bda-941b-8c5ba81a13fe)\"" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-4wff2" podUID="03c63a63-9a46-4bda-941b-8c5ba81a13fe" Nov 25 10:52:08 crc kubenswrapper[4813]: I1125 10:52:08.623191 4813 scope.go:117] "RemoveContainer" containerID="f6facba807807738a369dcafd72b8d71129ebba3f276630025dc6bc0ad7ff9f2" Nov 25 10:52:08 crc kubenswrapper[4813]: E1125 10:52:08.623820 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=watcher-operator-controller-manager-864885998-bpbjt_openstack-operators(48ea1018-a88f-4ef0-a82f-7e3b012522ec)\"" pod="openstack-operators/watcher-operator-controller-manager-864885998-bpbjt" podUID="48ea1018-a88f-4ef0-a82f-7e3b012522ec" Nov 25 10:52:09 crc kubenswrapper[4813]: I1125 10:52:09.616045 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-qjpvf" podUID="da545e4e-8f60-4fb5-93e8-d9e9014c3c74" containerName="ovn-controller" probeResult="failure" output=< Nov 25 10:52:09 crc kubenswrapper[4813]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 25 10:52:09 crc kubenswrapper[4813]: > Nov 25 10:52:09 crc kubenswrapper[4813]: I1125 10:52:09.624302 4813 scope.go:117] "RemoveContainer" containerID="e573c49981c6852d396a8509942b6e6ccf6672cb0c8326cf0eb3d5e1a9e8c845" Nov 25 10:52:09 crc kubenswrapper[4813]: E1125 10:52:09.625970 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager 
pod=keystone-operator-controller-manager-748dc6576f-76j46_openstack-operators(7921584b-8ce0-45b8-8a56-ab0fdde43582)\"" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-76j46" podUID="7921584b-8ce0-45b8-8a56-ab0fdde43582" Nov 25 10:52:09 crc kubenswrapper[4813]: I1125 10:52:09.626262 4813 scope.go:117] "RemoveContainer" containerID="bfd8fcbed80dd21da2cdcede7a0a9ad1efdc3d7bca2b44668e148ea6a5fdde0f" Nov 25 10:52:09 crc kubenswrapper[4813]: E1125 10:52:09.626614 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=horizon-operator-controller-manager-68c9694994-8spkk_openstack-operators(af18e07e-95b3-476f-9604-824c36ae74a5)\"" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-8spkk" podUID="af18e07e-95b3-476f-9604-824c36ae74a5" Nov 25 10:52:09 crc kubenswrapper[4813]: I1125 10:52:09.626773 4813 scope.go:117] "RemoveContainer" containerID="565871b19add60bb7d552c69efb73f85ee79a17dba2c06f7e68a57258c2ffb91" Nov 25 10:52:09 crc kubenswrapper[4813]: E1125 10:52:09.627235 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=nova-operator-controller-manager-79556f57fc-6j272_openstack-operators(9374bbb0-b458-4c1c-a327-67bcbea83045)\"" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-6j272" podUID="9374bbb0-b458-4c1c-a327-67bcbea83045" Nov 25 10:52:10 crc kubenswrapper[4813]: I1125 10:52:10.621579 4813 scope.go:117] "RemoveContainer" containerID="30c767accdc9d5805bb70bfa2132237ce239b924316a2f4a373fc18a12755362" Nov 25 10:52:10 crc kubenswrapper[4813]: I1125 10:52:10.622517 4813 scope.go:117] "RemoveContainer" containerID="bd668a2879e65c02ee79f8f9b65f7f57384a493033ef21531f79b7713fe13d84" Nov 25 10:52:10 crc kubenswrapper[4813]: I1125 10:52:10.622656 4813 scope.go:117] "RemoveContainer" containerID="b1071257cf141e4ed949afbe28f925bacd737907f1dc1d027f282faf5869e5aa" Nov 25 10:52:10 crc kubenswrapper[4813]: E1125 10:52:10.622773 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=manila-operator-controller-manager-58bb8d67cc-jcjzx_openstack-operators(efca9205-8a59-45ce-8c50-36b0d0389f12)\"" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-jcjzx" podUID="efca9205-8a59-45ce-8c50-36b0d0389f12" Nov 25 10:52:10 crc kubenswrapper[4813]: E1125 10:52:10.622915 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=octavia-operator-controller-manager-fd75fd47d-gjs27_openstack-operators(a31ffbb8-0255-45d6-9125-6cccc7b444ba)\"" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-gjs27" podUID="a31ffbb8-0255-45d6-9125-6cccc7b444ba" Nov 25 10:52:10 crc kubenswrapper[4813]: I1125 10:52:10.623095 4813 scope.go:117] "RemoveContainer" containerID="c6e8f4156c728ea163af25ae8d442259c066569139df204a7fa159b0e158d28e" Nov 25 10:52:10 crc kubenswrapper[4813]: E1125 10:52:10.623341 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager 
pod=placement-operator-controller-manager-5db546f9d9-2d2x7_openstack-operators(9093a664-86f3-4349-bd13-0a5e4aca8036)\"" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-2d2x7" podUID="9093a664-86f3-4349-bd13-0a5e4aca8036" Nov 25 10:52:10 crc kubenswrapper[4813]: I1125 10:52:10.623947 4813 scope.go:117] "RemoveContainer" containerID="2d244e16ae4e8c25f8f8687fa0ea5badbf10a8bf54a80aeaaba3d5d52017d701" Nov 25 10:52:10 crc kubenswrapper[4813]: E1125 10:52:10.624226 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=heat-operator-controller-manager-774b86978c-f6dvp_openstack-operators(eaf6f1c0-6585-4eba-8baf-942ed2503735)\"" pod="openstack-operators/heat-operator-controller-manager-774b86978c-f6dvp" podUID="eaf6f1c0-6585-4eba-8baf-942ed2503735" Nov 25 10:52:11 crc kubenswrapper[4813]: I1125 10:52:11.622950 4813 scope.go:117] "RemoveContainer" containerID="e400b372bef5517a7b482f5cba2a18963d9c857b5f7a86b80c9ecb2be398a4ca" Nov 25 10:52:11 crc kubenswrapper[4813]: I1125 10:52:11.623501 4813 scope.go:117] "RemoveContainer" containerID="151f57154401a3ebcad0931e8f36a6408b85b56586e1d9593e65d6e3084ddc72" Nov 25 10:52:11 crc kubenswrapper[4813]: I1125 10:52:11.623632 4813 scope.go:117] "RemoveContainer" containerID="225d0710504e73b1c1e6fcdd9b093a28e2a5b60c4aff05ee71678b87909a28d3" Nov 25 10:52:11 crc kubenswrapper[4813]: E1125 10:52:11.623758 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=mariadb-operator-controller-manager-cb6c4fdb7-5ldjd_openstack-operators(baf6f7bb-db50-4013-8b77-2b7e4c8101c2)\"" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-5ldjd" podUID="baf6f7bb-db50-4013-8b77-2b7e4c8101c2" Nov 25 10:52:11 crc kubenswrapper[4813]: I1125 10:52:11.623796 4813 scope.go:117] "RemoveContainer" containerID="7c6c417432731434aa1b2265b04ad2e7e1e9105be30859d914fe45bf0a6c9adb" Nov 25 10:52:11 crc kubenswrapper[4813]: E1125 10:52:11.623867 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=glance-operator-controller-manager-547cf68667-6v6dd_openstack-operators(71c5bfc5-a289-4942-bc55-819f06787eb6)\"" pod="openstack-operators/glance-operator-controller-manager-547cf68667-6v6dd" podUID="71c5bfc5-a289-4942-bc55-819f06787eb6" Nov 25 10:52:11 crc kubenswrapper[4813]: E1125 10:52:11.623948 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=telemetry-operator-controller-manager-567f98c9d-qplf9_openstack-operators(5f9254c7-c8dc-4504-bdf5-264c78e03b0c)\"" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-qplf9" podUID="5f9254c7-c8dc-4504-bdf5-264c78e03b0c" Nov 25 10:52:11 crc kubenswrapper[4813]: E1125 10:52:11.624187 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=designate-operator-controller-manager-7d695c9b56-hjqzd_openstack-operators(aa2934d9-d547-49d0-9d06-232120b44fa1)\"" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-hjqzd" 
podUID="aa2934d9-d547-49d0-9d06-232120b44fa1" Nov 25 10:52:12 crc kubenswrapper[4813]: I1125 10:52:12.622329 4813 scope.go:117] "RemoveContainer" containerID="e558d240a6bd77705f05b2797b29ea6fd8c416ff6fd3f1b978e06358afb10f7b" Nov 25 10:52:12 crc kubenswrapper[4813]: I1125 10:52:12.622446 4813 scope.go:117] "RemoveContainer" containerID="f3d988bd30ecb6ef6616939c2676db805195343ec40184587311abe7a65d0fbb" Nov 25 10:52:12 crc kubenswrapper[4813]: E1125 10:52:12.622653 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=swift-operator-controller-manager-6fdc4fcf86-fjkzd_openstack-operators(94c3d2b4-f1bb-402d-a39d-78e16bee970b)\"" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-fjkzd" podUID="94c3d2b4-f1bb-402d-a39d-78e16bee970b" Nov 25 10:52:12 crc kubenswrapper[4813]: E1125 10:52:12.622652 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=cinder-operator-controller-manager-79856dc55c-dvfd9_openstack-operators(a650bdd3-2541-4b76-b5db-64273262bc06)\"" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-dvfd9" podUID="a650bdd3-2541-4b76-b5db-64273262bc06" Nov 25 10:52:12 crc kubenswrapper[4813]: I1125 10:52:12.623032 4813 scope.go:117] "RemoveContainer" containerID="08e7f311e38946acbfb35ae6b1a86c7ad47e62db1724f5a533a0c9ebfbd382a3" Nov 25 10:52:12 crc kubenswrapper[4813]: I1125 10:52:12.623101 4813 scope.go:117] "RemoveContainer" containerID="d43aa5619836b50806c3c6f8793ae57891628615f721eb1b1d852cb274d9a62e" Nov 25 10:52:12 crc kubenswrapper[4813]: E1125 10:52:12.623299 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=operator pod=rabbitmq-cluster-operator-manager-668c99d594-qd4tx_openstack-operators(2bf03402-32ec-423d-a6af-657bc0cfeb15)\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qd4tx" podUID="2bf03402-32ec-423d-a6af-657bc0cfeb15" Nov 25 10:52:12 crc kubenswrapper[4813]: E1125 10:52:12.623311 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=ovn-operator-controller-manager-66cf5c67ff-tc2mg_openstack-operators(db556642-a360-4559-8cde-7c25d7a893e0)\"" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-tc2mg" podUID="db556642-a360-4559-8cde-7c25d7a893e0" Nov 25 10:52:13 crc kubenswrapper[4813]: I1125 10:52:13.627204 4813 scope.go:117] "RemoveContainer" containerID="3ee7e4d2a3463162d8318668cfd23d563d1861d1d089d820f02a7de59930eb4c" Nov 25 10:52:13 crc kubenswrapper[4813]: I1125 10:52:13.627507 4813 scope.go:117] "RemoveContainer" containerID="780d87d09c967309daa59ca92087c8228e2f4e65f95c031f844934a27f83e390" Nov 25 10:52:13 crc kubenswrapper[4813]: E1125 10:52:13.627533 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=neutron-operator-controller-manager-7c57c8bbc4-c6kw6_openstack-operators(b69526d6-6616-4536-a228-4cdb57e1881c)\"" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-c6kw6" podUID="b69526d6-6616-4536-a228-4cdb57e1881c" Nov 25 10:52:13 crc 
kubenswrapper[4813]: E1125 10:52:13.628376 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=ironic-operator-controller-manager-5bfcdc958c-blrjt_openstack-operators(d4a62556-e6e8-42dc-b7e4-180c40611393)\"" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-blrjt" podUID="d4a62556-e6e8-42dc-b7e4-180c40611393" Nov 25 10:52:14 crc kubenswrapper[4813]: I1125 10:52:14.331372 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="aea2efa1-cb45-4657-8ea6-efd7799cb0a4" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.101:5671: connect: connection refused" Nov 25 10:52:14 crc kubenswrapper[4813]: I1125 10:52:14.594757 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-qjpvf" podUID="da545e4e-8f60-4fb5-93e8-d9e9014c3c74" containerName="ovn-controller" probeResult="failure" output=< Nov 25 10:52:14 crc kubenswrapper[4813]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 25 10:52:14 crc kubenswrapper[4813]: > Nov 25 10:52:14 crc kubenswrapper[4813]: I1125 10:52:14.604747 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="bf91d2ed-6d43-49b1-8010-1f59f38aea76" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.102:5671: connect: connection refused" Nov 25 10:52:15 crc kubenswrapper[4813]: I1125 10:52:15.621586 4813 scope.go:117] "RemoveContainer" containerID="22b74a0493a9b4ac5f4bd66b0e2e92cc280ba3864cf6078ec2e8672fbea90133" Nov 25 10:52:15 crc kubenswrapper[4813]: E1125 10:52:15.622009 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=infra-operator-controller-manager-858778c9dc-fs9sm_openstack-operators(06c81a1e-0461-4457-85ea-1a4060423eda)\"" pod="openstack-operators/infra-operator-controller-manager-858778c9dc-fs9sm" podUID="06c81a1e-0461-4457-85ea-1a4060423eda" Nov 25 10:52:17 crc kubenswrapper[4813]: I1125 10:52:17.621489 4813 scope.go:117] "RemoveContainer" containerID="bd462bbd41de67f216310e3db3aecff932f3fa06f9964903533c0cb109c5d29a" Nov 25 10:52:17 crc kubenswrapper[4813]: E1125 10:52:17.622106 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=openstack-operator-controller-manager-5ffc8f797b-hbwwd_openstack-operators(09bd1800-0aaa-4908-ac58-e0890a2a309f)\"" pod="openstack-operators/openstack-operator-controller-manager-5ffc8f797b-hbwwd" podUID="09bd1800-0aaa-4908-ac58-e0890a2a309f" Nov 25 10:52:19 crc kubenswrapper[4813]: I1125 10:52:19.598989 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-qjpvf" podUID="da545e4e-8f60-4fb5-93e8-d9e9014c3c74" containerName="ovn-controller" probeResult="failure" output=< Nov 25 10:52:19 crc kubenswrapper[4813]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 25 10:52:19 crc kubenswrapper[4813]: > Nov 25 10:52:20 crc kubenswrapper[4813]: I1125 10:52:20.885102 4813 scope.go:117] "RemoveContainer" containerID="30c767accdc9d5805bb70bfa2132237ce239b924316a2f4a373fc18a12755362" Nov 25 10:52:20 crc kubenswrapper[4813]: I1125 10:52:20.888555 4813 
scope.go:117] "RemoveContainer" containerID="5a7ab610a3c323904b49fb346bfb5bfd21fe5707ab51c0de4176662641459056" Nov 25 10:52:21 crc kubenswrapper[4813]: I1125 10:52:21.966745 4813 patch_prober.go:28] interesting pod/machine-config-daemon-knhz8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 10:52:21 crc kubenswrapper[4813]: I1125 10:52:21.967125 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" podUID="8ece7e9c-d49a-4348-98ec-bd6ab589f750" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 10:52:22 crc kubenswrapper[4813]: I1125 10:52:22.622302 4813 scope.go:117] "RemoveContainer" containerID="e573c49981c6852d396a8509942b6e6ccf6672cb0c8326cf0eb3d5e1a9e8c845" Nov 25 10:52:22 crc kubenswrapper[4813]: I1125 10:52:22.622614 4813 scope.go:117] "RemoveContainer" containerID="7c6c417432731434aa1b2265b04ad2e7e1e9105be30859d914fe45bf0a6c9adb" Nov 25 10:52:22 crc kubenswrapper[4813]: I1125 10:52:22.622706 4813 scope.go:117] "RemoveContainer" containerID="f6facba807807738a369dcafd72b8d71129ebba3f276630025dc6bc0ad7ff9f2" Nov 25 10:52:22 crc kubenswrapper[4813]: I1125 10:52:22.622962 4813 scope.go:117] "RemoveContainer" containerID="225d0710504e73b1c1e6fcdd9b093a28e2a5b60c4aff05ee71678b87909a28d3" Nov 25 10:52:22 crc kubenswrapper[4813]: I1125 10:52:22.624034 4813 scope.go:117] "RemoveContainer" containerID="151f57154401a3ebcad0931e8f36a6408b85b56586e1d9593e65d6e3084ddc72" Nov 25 10:52:23 crc kubenswrapper[4813]: I1125 10:52:23.640729 4813 scope.go:117] "RemoveContainer" containerID="d43aa5619836b50806c3c6f8793ae57891628615f721eb1b1d852cb274d9a62e" Nov 25 10:52:23 crc kubenswrapper[4813]: I1125 10:52:23.643194 4813 scope.go:117] "RemoveContainer" containerID="bd668a2879e65c02ee79f8f9b65f7f57384a493033ef21531f79b7713fe13d84" Nov 25 10:52:23 crc kubenswrapper[4813]: I1125 10:52:23.647526 4813 scope.go:117] "RemoveContainer" containerID="2d244e16ae4e8c25f8f8687fa0ea5badbf10a8bf54a80aeaaba3d5d52017d701" Nov 25 10:52:24 crc kubenswrapper[4813]: I1125 10:52:24.330118 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="aea2efa1-cb45-4657-8ea6-efd7799cb0a4" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.101:5671: connect: connection refused" Nov 25 10:52:24 crc kubenswrapper[4813]: I1125 10:52:24.604049 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="bf91d2ed-6d43-49b1-8010-1f59f38aea76" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.102:5671: connect: connection refused" Nov 25 10:52:24 crc kubenswrapper[4813]: I1125 10:52:24.610295 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-qjpvf" podUID="da545e4e-8f60-4fb5-93e8-d9e9014c3c74" containerName="ovn-controller" probeResult="failure" output=< Nov 25 10:52:24 crc kubenswrapper[4813]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 25 10:52:24 crc kubenswrapper[4813]: > Nov 25 10:52:24 crc kubenswrapper[4813]: I1125 10:52:24.622185 4813 scope.go:117] "RemoveContainer" containerID="e400b372bef5517a7b482f5cba2a18963d9c857b5f7a86b80c9ecb2be398a4ca" Nov 25 10:52:24 
crc kubenswrapper[4813]: I1125 10:52:24.622389 4813 scope.go:117] "RemoveContainer" containerID="c6e8f4156c728ea163af25ae8d442259c066569139df204a7fa159b0e158d28e" Nov 25 10:52:24 crc kubenswrapper[4813]: I1125 10:52:24.622588 4813 scope.go:117] "RemoveContainer" containerID="565871b19add60bb7d552c69efb73f85ee79a17dba2c06f7e68a57258c2ffb91" Nov 25 10:52:24 crc kubenswrapper[4813]: I1125 10:52:24.622994 4813 scope.go:117] "RemoveContainer" containerID="bfd8fcbed80dd21da2cdcede7a0a9ad1efdc3d7bca2b44668e148ea6a5fdde0f" Nov 25 10:52:24 crc kubenswrapper[4813]: I1125 10:52:24.623338 4813 scope.go:117] "RemoveContainer" containerID="08e7f311e38946acbfb35ae6b1a86c7ad47e62db1724f5a533a0c9ebfbd382a3" Nov 25 10:52:25 crc kubenswrapper[4813]: I1125 10:52:25.944957 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-6j272" event={"ID":"9374bbb0-b458-4c1c-a327-67bcbea83045","Type":"ContainerStarted","Data":"fb0da571d2c5eebd93da1c3fa9f0fd5e33b75e9f900dfa803d340ae79b3f1b4b"} Nov 25 10:52:25 crc kubenswrapper[4813]: I1125 10:52:25.945711 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-6j272" Nov 25 10:52:25 crc kubenswrapper[4813]: I1125 10:52:25.947236 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-tc2mg" event={"ID":"db556642-a360-4559-8cde-7c25d7a893e0","Type":"ContainerStarted","Data":"36dd466c344e1dd447c1dac85e62e360357939702cce4218c04d302e39a3e3a5"} Nov 25 10:52:25 crc kubenswrapper[4813]: I1125 10:52:25.947435 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-tc2mg" Nov 25 10:52:25 crc kubenswrapper[4813]: I1125 10:52:25.950411 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-hjqzd" event={"ID":"aa2934d9-d547-49d0-9d06-232120b44fa1","Type":"ContainerStarted","Data":"9b5fffef8e4c1e32fc9899618b2bd8a2aba0fcb57e7e9a72e1ab68700b80a6a2"} Nov 25 10:52:25 crc kubenswrapper[4813]: I1125 10:52:25.950570 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-hjqzd" Nov 25 10:52:25 crc kubenswrapper[4813]: I1125 10:52:25.952461 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-864885998-bpbjt" event={"ID":"48ea1018-a88f-4ef0-a82f-7e3b012522ec","Type":"ContainerStarted","Data":"8a1f0c5348481aaea5ff1be357ad8856848fc9df88e85c516f67839eb7e10db6"} Nov 25 10:52:25 crc kubenswrapper[4813]: I1125 10:52:25.952644 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-864885998-bpbjt" Nov 25 10:52:25 crc kubenswrapper[4813]: I1125 10:52:25.956906 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-774b86978c-f6dvp" event={"ID":"eaf6f1c0-6585-4eba-8baf-942ed2503735","Type":"ContainerStarted","Data":"67564d0331e4f3762a6e1579035e045eb7ca3fddde7035e4c0d929490becc0d1"} Nov 25 10:52:25 crc kubenswrapper[4813]: I1125 10:52:25.957085 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-774b86978c-f6dvp" Nov 25 10:52:25 crc kubenswrapper[4813]: I1125 10:52:25.959065 4813 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-8spkk" event={"ID":"af18e07e-95b3-476f-9604-824c36ae74a5","Type":"ContainerStarted","Data":"f01061a996d961820268313687dee89402ecaf588a1aabeb961b58147629c246"} Nov 25 10:52:25 crc kubenswrapper[4813]: I1125 10:52:25.960005 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-8spkk" Nov 25 10:52:25 crc kubenswrapper[4813]: I1125 10:52:25.962716 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-gjs27" event={"ID":"a31ffbb8-0255-45d6-9125-6cccc7b444ba","Type":"ContainerStarted","Data":"fb33e23b8c8211c764d7054f137060146628f4776a70fbf324412ec0d97469a9"} Nov 25 10:52:25 crc kubenswrapper[4813]: I1125 10:52:25.963319 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-gjs27" Nov 25 10:52:25 crc kubenswrapper[4813]: I1125 10:52:25.965726 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-5ldjd" event={"ID":"baf6f7bb-db50-4013-8b77-2b7e4c8101c2","Type":"ContainerStarted","Data":"c399a61d060c3ef00fcb8a69af26263fb0f26be2eb18ee5a22cf34e7f616ac9a"} Nov 25 10:52:25 crc kubenswrapper[4813]: I1125 10:52:25.965896 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-5ldjd" Nov 25 10:52:25 crc kubenswrapper[4813]: I1125 10:52:25.969463 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-76j46" event={"ID":"7921584b-8ce0-45b8-8a56-ab0fdde43582","Type":"ContainerStarted","Data":"204b0202701b129e4b6d2b879209cd30c28387b3cf0ed1510ec259735c4dfea4"} Nov 25 10:52:25 crc kubenswrapper[4813]: I1125 10:52:25.969838 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-76j46" Nov 25 10:52:25 crc kubenswrapper[4813]: I1125 10:52:25.971560 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qd4tx" event={"ID":"2bf03402-32ec-423d-a6af-657bc0cfeb15","Type":"ContainerStarted","Data":"e7ba0a5b44984da78aa185e3e0404396e302fe960d6815cb5e348988423ac910"} Nov 25 10:52:25 crc kubenswrapper[4813]: I1125 10:52:25.979121 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-2d2x7" event={"ID":"9093a664-86f3-4349-bd13-0a5e4aca8036","Type":"ContainerStarted","Data":"a7b6a948048d52ec37e53e1b44102e6605645ae00ed11a32826cbbf0543c33b1"} Nov 25 10:52:25 crc kubenswrapper[4813]: I1125 10:52:25.979334 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-2d2x7" Nov 25 10:52:25 crc kubenswrapper[4813]: I1125 10:52:25.983659 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-4wff2" event={"ID":"03c63a63-9a46-4bda-941b-8c5ba81a13fe","Type":"ContainerStarted","Data":"a50581be9fc1650f54808b216fb43b905bb4c8013b14b8a87d4c367f42ee6a0b"} Nov 25 10:52:25 crc kubenswrapper[4813]: I1125 10:52:25.987457 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-qplf9" event={"ID":"5f9254c7-c8dc-4504-bdf5-264c78e03b0c","Type":"ContainerStarted","Data":"2e3f7eb2a5fbe4c2d1d8a15481d56ff321b25de02930b4b15a23326eddd1d348"} Nov 25 10:52:25 crc kubenswrapper[4813]: I1125 10:52:25.987881 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-qplf9" Nov 25 10:52:25 crc kubenswrapper[4813]: I1125 10:52:25.991732 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-547cf68667-6v6dd" event={"ID":"71c5bfc5-a289-4942-bc55-819f06787eb6","Type":"ContainerStarted","Data":"4f135493fc7556e9c58590eb9bf8d69cfbf3d7c8a0ea8d519a1476fd177b928e"} Nov 25 10:52:25 crc kubenswrapper[4813]: I1125 10:52:25.992140 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-547cf68667-6v6dd" Nov 25 10:52:25 crc kubenswrapper[4813]: I1125 10:52:25.994842 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-jcjzx" event={"ID":"efca9205-8a59-45ce-8c50-36b0d0389f12","Type":"ContainerStarted","Data":"ab107f279b9b1a2270fd41fb4500866c58a924f4ceafa1d7a3503669c9008023"} Nov 25 10:52:25 crc kubenswrapper[4813]: I1125 10:52:25.995013 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-jcjzx" Nov 25 10:52:25 crc kubenswrapper[4813]: I1125 10:52:25.996910 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6b84b955f5-mmrm7" event={"ID":"a6eb0ffd-2e55-4d5a-9ac7-19b25ba6ec8b","Type":"ContainerStarted","Data":"95960e87c183c77eca10798685e478273d9dd2ae368ed66dffb0e04d722d0c14"} Nov 25 10:52:25 crc kubenswrapper[4813]: I1125 10:52:25.997058 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-6b84b955f5-mmrm7" Nov 25 10:52:26 crc kubenswrapper[4813]: I1125 10:52:26.622111 4813 scope.go:117] "RemoveContainer" containerID="3ee7e4d2a3463162d8318668cfd23d563d1861d1d089d820f02a7de59930eb4c" Nov 25 10:52:26 crc kubenswrapper[4813]: I1125 10:52:26.625739 4813 scope.go:117] "RemoveContainer" containerID="e558d240a6bd77705f05b2797b29ea6fd8c416ff6fd3f1b978e06358afb10f7b" Nov 25 10:52:26 crc kubenswrapper[4813]: I1125 10:52:26.625944 4813 scope.go:117] "RemoveContainer" containerID="780d87d09c967309daa59ca92087c8228e2f4e65f95c031f844934a27f83e390" Nov 25 10:52:27 crc kubenswrapper[4813]: I1125 10:52:27.006152 4813 generic.go:334] "Generic (PLEG): container finished" podID="bf91d2ed-6d43-49b1-8010-1f59f38aea76" containerID="b26dfe3bfa16a358ba34719d2d171b58f608011bb3d587e65bf248799c25b60a" exitCode=0 Nov 25 10:52:27 crc kubenswrapper[4813]: I1125 10:52:27.006227 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"bf91d2ed-6d43-49b1-8010-1f59f38aea76","Type":"ContainerDied","Data":"b26dfe3bfa16a358ba34719d2d171b58f608011bb3d587e65bf248799c25b60a"} Nov 25 10:52:27 crc kubenswrapper[4813]: I1125 10:52:27.006468 4813 scope.go:117] "RemoveContainer" containerID="6da1ff7d6fcae58f674efd8d3293596350556c5b00a3e9b7de75cac5015c696e" Nov 25 10:52:27 crc kubenswrapper[4813]: I1125 10:52:27.007189 4813 scope.go:117] "RemoveContainer" 
containerID="b26dfe3bfa16a358ba34719d2d171b58f608011bb3d587e65bf248799c25b60a" Nov 25 10:52:27 crc kubenswrapper[4813]: E1125 10:52:27.007463 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 10s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(bf91d2ed-6d43-49b1-8010-1f59f38aea76)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="bf91d2ed-6d43-49b1-8010-1f59f38aea76" Nov 25 10:52:27 crc kubenswrapper[4813]: I1125 10:52:27.009396 4813 generic.go:334] "Generic (PLEG): container finished" podID="aea2efa1-cb45-4657-8ea6-efd7799cb0a4" containerID="698cac60a978b705288eed6b2f78eb558f90f0bcd382da2a9af737902cce4aca" exitCode=0 Nov 25 10:52:27 crc kubenswrapper[4813]: I1125 10:52:27.009503 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"aea2efa1-cb45-4657-8ea6-efd7799cb0a4","Type":"ContainerDied","Data":"698cac60a978b705288eed6b2f78eb558f90f0bcd382da2a9af737902cce4aca"} Nov 25 10:52:27 crc kubenswrapper[4813]: I1125 10:52:27.010175 4813 scope.go:117] "RemoveContainer" containerID="698cac60a978b705288eed6b2f78eb558f90f0bcd382da2a9af737902cce4aca" Nov 25 10:52:27 crc kubenswrapper[4813]: E1125 10:52:27.010395 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 10s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(aea2efa1-cb45-4657-8ea6-efd7799cb0a4)\"" pod="openstack/rabbitmq-server-0" podUID="aea2efa1-cb45-4657-8ea6-efd7799cb0a4" Nov 25 10:52:27 crc kubenswrapper[4813]: I1125 10:52:27.281759 4813 scope.go:117] "RemoveContainer" containerID="913f926750f49dfe77513dbf4232783df214e00f76210764b2934cb0fad38b6b" Nov 25 10:52:27 crc kubenswrapper[4813]: I1125 10:52:27.621384 4813 scope.go:117] "RemoveContainer" containerID="22b74a0493a9b4ac5f4bd66b0e2e92cc280ba3864cf6078ec2e8672fbea90133" Nov 25 10:52:27 crc kubenswrapper[4813]: I1125 10:52:27.621587 4813 scope.go:117] "RemoveContainer" containerID="f3d988bd30ecb6ef6616939c2676db805195343ec40184587311abe7a65d0fbb" Nov 25 10:52:28 crc kubenswrapper[4813]: I1125 10:52:28.021464 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-dvfd9" event={"ID":"a650bdd3-2541-4b76-b5db-64273262bc06","Type":"ContainerStarted","Data":"4219d21486e941436b2c7a3894d8ac0c220a4516040a3131124c7f40d5902fcf"} Nov 25 10:52:28 crc kubenswrapper[4813]: I1125 10:52:28.023317 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-dvfd9" Nov 25 10:52:28 crc kubenswrapper[4813]: I1125 10:52:28.028690 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-858778c9dc-fs9sm" event={"ID":"06c81a1e-0461-4457-85ea-1a4060423eda","Type":"ContainerStarted","Data":"d5a5ce4e5c3a178b88b64d7e3cd7d4deee62b91f3a59f7ff7bb5b62e1af2200e"} Nov 25 10:52:28 crc kubenswrapper[4813]: I1125 10:52:28.028980 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-858778c9dc-fs9sm" Nov 25 10:52:28 crc kubenswrapper[4813]: I1125 10:52:28.052031 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-c6kw6" 
event={"ID":"b69526d6-6616-4536-a228-4cdb57e1881c","Type":"ContainerStarted","Data":"fa4b29640532fab04beec09f2bc61a39b8827ee0dcd42c81eb2ca485ec9f5a41"} Nov 25 10:52:28 crc kubenswrapper[4813]: I1125 10:52:28.052741 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-c6kw6" Nov 25 10:52:28 crc kubenswrapper[4813]: I1125 10:52:28.060117 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-fjkzd" event={"ID":"94c3d2b4-f1bb-402d-a39d-78e16bee970b","Type":"ContainerStarted","Data":"4438c6135fe0c301c637f6438cf0f855c81632e1f909a399cd7aa626bf39be82"} Nov 25 10:52:28 crc kubenswrapper[4813]: I1125 10:52:28.060363 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-fjkzd" Nov 25 10:52:28 crc kubenswrapper[4813]: I1125 10:52:28.071555 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-blrjt" event={"ID":"d4a62556-e6e8-42dc-b7e4-180c40611393","Type":"ContainerStarted","Data":"e2ac89cb1e92cff59c9891a29e56aa5d0117834daa365389df22bcc2c4d9f153"} Nov 25 10:52:28 crc kubenswrapper[4813]: I1125 10:52:28.072798 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-blrjt" Nov 25 10:52:29 crc kubenswrapper[4813]: I1125 10:52:29.604022 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-qjpvf" podUID="da545e4e-8f60-4fb5-93e8-d9e9014c3c74" containerName="ovn-controller" probeResult="failure" output=< Nov 25 10:52:29 crc kubenswrapper[4813]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 25 10:52:29 crc kubenswrapper[4813]: > Nov 25 10:52:30 crc kubenswrapper[4813]: I1125 10:52:30.621878 4813 scope.go:117] "RemoveContainer" containerID="bd462bbd41de67f216310e3db3aecff932f3fa06f9964903533c0cb109c5d29a" Nov 25 10:52:31 crc kubenswrapper[4813]: I1125 10:52:31.107519 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5ffc8f797b-hbwwd" event={"ID":"09bd1800-0aaa-4908-ac58-e0890a2a309f","Type":"ContainerStarted","Data":"9011e85f99a94c9c99e506362f6d358e12aebeee0a38ba6e7567776866127d8d"} Nov 25 10:52:32 crc kubenswrapper[4813]: I1125 10:52:32.133278 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-5ffc8f797b-hbwwd" Nov 25 10:52:34 crc kubenswrapper[4813]: I1125 10:52:34.343593 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-4wff2" Nov 25 10:52:34 crc kubenswrapper[4813]: I1125 10:52:34.346236 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-4wff2" Nov 25 10:52:34 crc kubenswrapper[4813]: I1125 10:52:34.364561 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-dvfd9" Nov 25 10:52:34 crc kubenswrapper[4813]: I1125 10:52:34.382859 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-hjqzd" Nov 25 10:52:34 crc 
kubenswrapper[4813]: I1125 10:52:34.420479 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-547cf68667-6v6dd" Nov 25 10:52:34 crc kubenswrapper[4813]: I1125 10:52:34.461158 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-774b86978c-f6dvp" Nov 25 10:52:34 crc kubenswrapper[4813]: I1125 10:52:34.480516 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-8spkk" Nov 25 10:52:34 crc kubenswrapper[4813]: I1125 10:52:34.576502 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-blrjt" Nov 25 10:52:34 crc kubenswrapper[4813]: I1125 10:52:34.594821 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-858778c9dc-fs9sm" Nov 25 10:52:34 crc kubenswrapper[4813]: I1125 10:52:34.662706 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-qjpvf" podUID="da545e4e-8f60-4fb5-93e8-d9e9014c3c74" containerName="ovn-controller" probeResult="failure" output=< Nov 25 10:52:34 crc kubenswrapper[4813]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 25 10:52:34 crc kubenswrapper[4813]: > Nov 25 10:52:34 crc kubenswrapper[4813]: I1125 10:52:34.761089 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-jcjzx" Nov 25 10:52:34 crc kubenswrapper[4813]: I1125 10:52:34.766077 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-76j46" Nov 25 10:52:34 crc kubenswrapper[4813]: I1125 10:52:34.810877 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-5ldjd" Nov 25 10:52:34 crc kubenswrapper[4813]: I1125 10:52:34.836884 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-c6kw6" Nov 25 10:52:34 crc kubenswrapper[4813]: I1125 10:52:34.840247 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-6j272" Nov 25 10:52:34 crc kubenswrapper[4813]: I1125 10:52:34.880320 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-gjs27" Nov 25 10:52:34 crc kubenswrapper[4813]: I1125 10:52:34.994033 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-2d2x7" Nov 25 10:52:35 crc kubenswrapper[4813]: I1125 10:52:35.032058 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-fjkzd" Nov 25 10:52:35 crc kubenswrapper[4813]: I1125 10:52:35.063034 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-qplf9" Nov 25 10:52:35 crc kubenswrapper[4813]: I1125 10:52:35.185200 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-tc2mg" Nov 25 10:52:35 crc kubenswrapper[4813]: I1125 10:52:35.230247 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-864885998-bpbjt" Nov 25 10:52:38 crc kubenswrapper[4813]: I1125 10:52:38.176800 4813 generic.go:334] "Generic (PLEG): container finished" podID="064ff305-c1e7-4539-bb22-f4be9b8f1445" containerID="2c71ea5a6fe8208692095f2dfe4b548efb8e7b7138ac8a25edf7c7c9db638185" exitCode=0 Nov 25 10:52:38 crc kubenswrapper[4813]: I1125 10:52:38.176847 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cdcjj/crc-debug-wfh7d" event={"ID":"064ff305-c1e7-4539-bb22-f4be9b8f1445","Type":"ContainerDied","Data":"2c71ea5a6fe8208692095f2dfe4b548efb8e7b7138ac8a25edf7c7c9db638185"} Nov 25 10:52:38 crc kubenswrapper[4813]: I1125 10:52:38.621967 4813 scope.go:117] "RemoveContainer" containerID="698cac60a978b705288eed6b2f78eb558f90f0bcd382da2a9af737902cce4aca" Nov 25 10:52:38 crc kubenswrapper[4813]: I1125 10:52:38.992097 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-5ffc8f797b-hbwwd" Nov 25 10:52:39 crc kubenswrapper[4813]: I1125 10:52:39.188159 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"aea2efa1-cb45-4657-8ea6-efd7799cb0a4","Type":"ContainerStarted","Data":"ca1a05d39cf9b33814186e3b892724763ddfc8d4c695387f91410c8682bc841b"} Nov 25 10:52:39 crc kubenswrapper[4813]: I1125 10:52:39.188410 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 25 10:52:39 crc kubenswrapper[4813]: I1125 10:52:39.265283 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cdcjj/crc-debug-wfh7d" Nov 25 10:52:39 crc kubenswrapper[4813]: I1125 10:52:39.294593 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-cdcjj/crc-debug-wfh7d"] Nov 25 10:52:39 crc kubenswrapper[4813]: I1125 10:52:39.301091 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-cdcjj/crc-debug-wfh7d"] Nov 25 10:52:39 crc kubenswrapper[4813]: I1125 10:52:39.342221 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6dtt\" (UniqueName: \"kubernetes.io/projected/064ff305-c1e7-4539-bb22-f4be9b8f1445-kube-api-access-b6dtt\") pod \"064ff305-c1e7-4539-bb22-f4be9b8f1445\" (UID: \"064ff305-c1e7-4539-bb22-f4be9b8f1445\") " Nov 25 10:52:39 crc kubenswrapper[4813]: I1125 10:52:39.342289 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/064ff305-c1e7-4539-bb22-f4be9b8f1445-host\") pod \"064ff305-c1e7-4539-bb22-f4be9b8f1445\" (UID: \"064ff305-c1e7-4539-bb22-f4be9b8f1445\") " Nov 25 10:52:39 crc kubenswrapper[4813]: I1125 10:52:39.342453 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/064ff305-c1e7-4539-bb22-f4be9b8f1445-host" (OuterVolumeSpecName: "host") pod "064ff305-c1e7-4539-bb22-f4be9b8f1445" (UID: "064ff305-c1e7-4539-bb22-f4be9b8f1445"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 10:52:39 crc kubenswrapper[4813]: I1125 10:52:39.343557 4813 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/064ff305-c1e7-4539-bb22-f4be9b8f1445-host\") on node \"crc\" DevicePath \"\"" Nov 25 10:52:39 crc kubenswrapper[4813]: I1125 10:52:39.353662 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/064ff305-c1e7-4539-bb22-f4be9b8f1445-kube-api-access-b6dtt" (OuterVolumeSpecName: "kube-api-access-b6dtt") pod "064ff305-c1e7-4539-bb22-f4be9b8f1445" (UID: "064ff305-c1e7-4539-bb22-f4be9b8f1445"). InnerVolumeSpecName "kube-api-access-b6dtt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:52:39 crc kubenswrapper[4813]: I1125 10:52:39.445549 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6dtt\" (UniqueName: \"kubernetes.io/projected/064ff305-c1e7-4539-bb22-f4be9b8f1445-kube-api-access-b6dtt\") on node \"crc\" DevicePath \"\"" Nov 25 10:52:40 crc kubenswrapper[4813]: I1125 10:52:39.604194 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-qjpvf" podUID="da545e4e-8f60-4fb5-93e8-d9e9014c3c74" containerName="ovn-controller" probeResult="failure" output=< Nov 25 10:52:40 crc kubenswrapper[4813]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 25 10:52:40 crc kubenswrapper[4813]: > Nov 25 10:52:40 crc kubenswrapper[4813]: I1125 10:52:39.621783 4813 scope.go:117] "RemoveContainer" containerID="b26dfe3bfa16a358ba34719d2d171b58f608011bb3d587e65bf248799c25b60a" Nov 25 10:52:40 crc kubenswrapper[4813]: I1125 10:52:39.633900 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="064ff305-c1e7-4539-bb22-f4be9b8f1445" path="/var/lib/kubelet/pods/064ff305-c1e7-4539-bb22-f4be9b8f1445/volumes" Nov 25 10:52:40 crc kubenswrapper[4813]: I1125 10:52:40.197238 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cdcjj/crc-debug-wfh7d" Nov 25 10:52:40 crc kubenswrapper[4813]: I1125 10:52:40.197238 4813 scope.go:117] "RemoveContainer" containerID="2c71ea5a6fe8208692095f2dfe4b548efb8e7b7138ac8a25edf7c7c9db638185" Nov 25 10:52:40 crc kubenswrapper[4813]: I1125 10:52:40.201397 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"bf91d2ed-6d43-49b1-8010-1f59f38aea76","Type":"ContainerStarted","Data":"12a9566dd4d0de1e1c88ae422e392dbf565701e6a27c317d823d22da7599b473"} Nov 25 10:52:40 crc kubenswrapper[4813]: I1125 10:52:40.506094 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-cdcjj/crc-debug-hv9gd"] Nov 25 10:52:40 crc kubenswrapper[4813]: E1125 10:52:40.506446 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="064ff305-c1e7-4539-bb22-f4be9b8f1445" containerName="container-00" Nov 25 10:52:40 crc kubenswrapper[4813]: I1125 10:52:40.506462 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="064ff305-c1e7-4539-bb22-f4be9b8f1445" containerName="container-00" Nov 25 10:52:40 crc kubenswrapper[4813]: I1125 10:52:40.506636 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="064ff305-c1e7-4539-bb22-f4be9b8f1445" containerName="container-00" Nov 25 10:52:40 crc kubenswrapper[4813]: I1125 10:52:40.507209 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cdcjj/crc-debug-hv9gd" Nov 25 10:52:40 crc kubenswrapper[4813]: I1125 10:52:40.509385 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-cdcjj"/"default-dockercfg-9x4v8" Nov 25 10:52:40 crc kubenswrapper[4813]: I1125 10:52:40.566851 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf6pr\" (UniqueName: \"kubernetes.io/projected/9277de39-22e3-485b-a1cb-9d9e4d21ee9b-kube-api-access-bf6pr\") pod \"crc-debug-hv9gd\" (UID: \"9277de39-22e3-485b-a1cb-9d9e4d21ee9b\") " pod="openshift-must-gather-cdcjj/crc-debug-hv9gd" Nov 25 10:52:40 crc kubenswrapper[4813]: I1125 10:52:40.566917 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9277de39-22e3-485b-a1cb-9d9e4d21ee9b-host\") pod \"crc-debug-hv9gd\" (UID: \"9277de39-22e3-485b-a1cb-9d9e4d21ee9b\") " pod="openshift-must-gather-cdcjj/crc-debug-hv9gd" Nov 25 10:52:40 crc kubenswrapper[4813]: I1125 10:52:40.668857 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bf6pr\" (UniqueName: \"kubernetes.io/projected/9277de39-22e3-485b-a1cb-9d9e4d21ee9b-kube-api-access-bf6pr\") pod \"crc-debug-hv9gd\" (UID: \"9277de39-22e3-485b-a1cb-9d9e4d21ee9b\") " pod="openshift-must-gather-cdcjj/crc-debug-hv9gd" Nov 25 10:52:40 crc kubenswrapper[4813]: I1125 10:52:40.668912 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9277de39-22e3-485b-a1cb-9d9e4d21ee9b-host\") pod \"crc-debug-hv9gd\" (UID: \"9277de39-22e3-485b-a1cb-9d9e4d21ee9b\") " pod="openshift-must-gather-cdcjj/crc-debug-hv9gd" Nov 25 10:52:40 crc kubenswrapper[4813]: I1125 10:52:40.669113 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9277de39-22e3-485b-a1cb-9d9e4d21ee9b-host\") pod \"crc-debug-hv9gd\" (UID: \"9277de39-22e3-485b-a1cb-9d9e4d21ee9b\") " pod="openshift-must-gather-cdcjj/crc-debug-hv9gd" Nov 25 10:52:40 crc kubenswrapper[4813]: I1125 10:52:40.702367 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bf6pr\" (UniqueName: \"kubernetes.io/projected/9277de39-22e3-485b-a1cb-9d9e4d21ee9b-kube-api-access-bf6pr\") pod \"crc-debug-hv9gd\" (UID: \"9277de39-22e3-485b-a1cb-9d9e4d21ee9b\") " pod="openshift-must-gather-cdcjj/crc-debug-hv9gd" Nov 25 10:52:40 crc kubenswrapper[4813]: I1125 10:52:40.826225 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cdcjj/crc-debug-hv9gd" Nov 25 10:52:41 crc kubenswrapper[4813]: I1125 10:52:41.212398 4813 generic.go:334] "Generic (PLEG): container finished" podID="9277de39-22e3-485b-a1cb-9d9e4d21ee9b" containerID="cb20a32dd1578f04238ccc900ea53828198a927e282c013f83cde54485029a9b" exitCode=1 Nov 25 10:52:41 crc kubenswrapper[4813]: I1125 10:52:41.212528 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cdcjj/crc-debug-hv9gd" event={"ID":"9277de39-22e3-485b-a1cb-9d9e4d21ee9b","Type":"ContainerDied","Data":"cb20a32dd1578f04238ccc900ea53828198a927e282c013f83cde54485029a9b"} Nov 25 10:52:41 crc kubenswrapper[4813]: I1125 10:52:41.212779 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cdcjj/crc-debug-hv9gd" event={"ID":"9277de39-22e3-485b-a1cb-9d9e4d21ee9b","Type":"ContainerStarted","Data":"c0a73a7c9e26b008a578b059b0799e9c9be7243c8f1e271d662e92ad53fb66e6"} Nov 25 10:52:41 crc kubenswrapper[4813]: I1125 10:52:41.254457 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-cdcjj/crc-debug-hv9gd"] Nov 25 10:52:41 crc kubenswrapper[4813]: I1125 10:52:41.260400 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-cdcjj/crc-debug-hv9gd"] Nov 25 10:52:42 crc kubenswrapper[4813]: I1125 10:52:42.304866 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cdcjj/crc-debug-hv9gd" Nov 25 10:52:42 crc kubenswrapper[4813]: I1125 10:52:42.401778 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf6pr\" (UniqueName: \"kubernetes.io/projected/9277de39-22e3-485b-a1cb-9d9e4d21ee9b-kube-api-access-bf6pr\") pod \"9277de39-22e3-485b-a1cb-9d9e4d21ee9b\" (UID: \"9277de39-22e3-485b-a1cb-9d9e4d21ee9b\") " Nov 25 10:52:42 crc kubenswrapper[4813]: I1125 10:52:42.401865 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9277de39-22e3-485b-a1cb-9d9e4d21ee9b-host\") pod \"9277de39-22e3-485b-a1cb-9d9e4d21ee9b\" (UID: \"9277de39-22e3-485b-a1cb-9d9e4d21ee9b\") " Nov 25 10:52:42 crc kubenswrapper[4813]: I1125 10:52:42.402028 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9277de39-22e3-485b-a1cb-9d9e4d21ee9b-host" (OuterVolumeSpecName: "host") pod "9277de39-22e3-485b-a1cb-9d9e4d21ee9b" (UID: "9277de39-22e3-485b-a1cb-9d9e4d21ee9b"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 10:52:42 crc kubenswrapper[4813]: I1125 10:52:42.402431 4813 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9277de39-22e3-485b-a1cb-9d9e4d21ee9b-host\") on node \"crc\" DevicePath \"\"" Nov 25 10:52:42 crc kubenswrapper[4813]: I1125 10:52:42.409936 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9277de39-22e3-485b-a1cb-9d9e4d21ee9b-kube-api-access-bf6pr" (OuterVolumeSpecName: "kube-api-access-bf6pr") pod "9277de39-22e3-485b-a1cb-9d9e4d21ee9b" (UID: "9277de39-22e3-485b-a1cb-9d9e4d21ee9b"). InnerVolumeSpecName "kube-api-access-bf6pr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:52:42 crc kubenswrapper[4813]: I1125 10:52:42.503636 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf6pr\" (UniqueName: \"kubernetes.io/projected/9277de39-22e3-485b-a1cb-9d9e4d21ee9b-kube-api-access-bf6pr\") on node \"crc\" DevicePath \"\"" Nov 25 10:52:42 crc kubenswrapper[4813]: I1125 10:52:42.680534 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-qjpvf" podUID="da545e4e-8f60-4fb5-93e8-d9e9014c3c74" containerName="ovn-controller" probeResult="failure" output=< Nov 25 10:52:42 crc kubenswrapper[4813]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 25 10:52:42 crc kubenswrapper[4813]: > Nov 25 10:52:43 crc kubenswrapper[4813]: I1125 10:52:43.232526 4813 scope.go:117] "RemoveContainer" containerID="cb20a32dd1578f04238ccc900ea53828198a927e282c013f83cde54485029a9b" Nov 25 10:52:43 crc kubenswrapper[4813]: I1125 10:52:43.233120 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cdcjj/crc-debug-hv9gd" Nov 25 10:52:43 crc kubenswrapper[4813]: I1125 10:52:43.632267 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9277de39-22e3-485b-a1cb-9d9e4d21ee9b" path="/var/lib/kubelet/pods/9277de39-22e3-485b-a1cb-9d9e4d21ee9b/volumes" Nov 25 10:52:44 crc kubenswrapper[4813]: I1125 10:52:44.598513 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-qjpvf" podUID="da545e4e-8f60-4fb5-93e8-d9e9014c3c74" containerName="ovn-controller" probeResult="failure" output=< Nov 25 10:52:44 crc kubenswrapper[4813]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 25 10:52:44 crc kubenswrapper[4813]: > Nov 25 10:52:44 crc kubenswrapper[4813]: I1125 10:52:44.603415 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 25 10:52:49 crc kubenswrapper[4813]: I1125 10:52:49.630707 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-qjpvf" podUID="da545e4e-8f60-4fb5-93e8-d9e9014c3c74" containerName="ovn-controller" probeResult="failure" output=< Nov 25 10:52:49 crc kubenswrapper[4813]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 25 10:52:49 crc kubenswrapper[4813]: > Nov 25 10:52:49 crc kubenswrapper[4813]: I1125 10:52:49.970023 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-57d769cc4f-6bkfh_fe34d8fb-5b40-4191-8015-acb5ed8ea562/init/0.log" Nov 25 10:52:50 crc kubenswrapper[4813]: I1125 10:52:50.134172 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-57d769cc4f-6bkfh_fe34d8fb-5b40-4191-8015-acb5ed8ea562/init/0.log" Nov 25 10:52:50 crc kubenswrapper[4813]: I1125 10:52:50.152564 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-57d769cc4f-6bkfh_fe34d8fb-5b40-4191-8015-acb5ed8ea562/dnsmasq-dns/0.log" Nov 25 10:52:50 crc kubenswrapper[4813]: I1125 10:52:50.233016 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_e9030c35-b810-4f59-b1e6-5daec39fcc6d/kube-state-metrics/3.log" Nov 25 10:52:50 crc kubenswrapper[4813]: I1125 10:52:50.374781 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_e9030c35-b810-4f59-b1e6-5daec39fcc6d/kube-state-metrics/2.log" Nov 25 
10:52:50 crc kubenswrapper[4813]: I1125 10:52:50.444794 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_11b88009-8577-4264-afbf-8aee9bfc90f8/memcached/0.log" Nov 25 10:52:50 crc kubenswrapper[4813]: I1125 10:52:50.737264 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_0444b7b3-af36-4fca-80c6-8348adc42a58/mysql-bootstrap/0.log" Nov 25 10:52:50 crc kubenswrapper[4813]: I1125 10:52:50.862176 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_0444b7b3-af36-4fca-80c6-8348adc42a58/mysql-bootstrap/0.log" Nov 25 10:52:50 crc kubenswrapper[4813]: I1125 10:52:50.911469 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_0444b7b3-af36-4fca-80c6-8348adc42a58/galera/0.log" Nov 25 10:52:50 crc kubenswrapper[4813]: I1125 10:52:50.980942 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_9005be17-9874-4f4f-bd91-39b3c74314ec/mysql-bootstrap/0.log" Nov 25 10:52:51 crc kubenswrapper[4813]: I1125 10:52:51.184964 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_9005be17-9874-4f4f-bd91-39b3c74314ec/mysql-bootstrap/0.log" Nov 25 10:52:51 crc kubenswrapper[4813]: I1125 10:52:51.203080 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_9005be17-9874-4f4f-bd91-39b3c74314ec/galera/0.log" Nov 25 10:52:51 crc kubenswrapper[4813]: I1125 10:52:51.265850 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-kzv7f_9bf77eb8-82fb-4ad7-9cf8-57d017a0ce0d/ovsdb-server-init/0.log" Nov 25 10:52:51 crc kubenswrapper[4813]: I1125 10:52:51.409628 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-kzv7f_9bf77eb8-82fb-4ad7-9cf8-57d017a0ce0d/ovs-vswitchd/0.log" Nov 25 10:52:51 crc kubenswrapper[4813]: I1125 10:52:51.421300 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-kzv7f_9bf77eb8-82fb-4ad7-9cf8-57d017a0ce0d/ovsdb-server-init/0.log" Nov 25 10:52:51 crc kubenswrapper[4813]: I1125 10:52:51.438522 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-kzv7f_9bf77eb8-82fb-4ad7-9cf8-57d017a0ce0d/ovsdb-server/0.log" Nov 25 10:52:51 crc kubenswrapper[4813]: I1125 10:52:51.585922 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-qjpvf_da545e4e-8f60-4fb5-93e8-d9e9014c3c74/ovn-controller/0.log" Nov 25 10:52:51 crc kubenswrapper[4813]: I1125 10:52:51.720054 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_04683d4b-dec7-42f6-9803-b301f1d449c3/openstack-network-exporter/0.log" Nov 25 10:52:51 crc kubenswrapper[4813]: I1125 10:52:51.722101 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_04683d4b-dec7-42f6-9803-b301f1d449c3/ovsdbserver-nb/0.log" Nov 25 10:52:51 crc kubenswrapper[4813]: I1125 10:52:51.894591 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_7c8efda2-acd3-4ecf-9295-0ad8d037ca94/openstack-network-exporter/0.log" Nov 25 10:52:51 crc kubenswrapper[4813]: I1125 10:52:51.959703 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_7c8efda2-acd3-4ecf-9295-0ad8d037ca94/ovsdbserver-sb/0.log" Nov 25 10:52:51 crc kubenswrapper[4813]: I1125 10:52:51.967625 4813 
patch_prober.go:28] interesting pod/machine-config-daemon-knhz8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 10:52:51 crc kubenswrapper[4813]: I1125 10:52:51.967728 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" podUID="8ece7e9c-d49a-4348-98ec-bd6ab589f750" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 10:52:52 crc kubenswrapper[4813]: I1125 10:52:52.118337 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_bf91d2ed-6d43-49b1-8010-1f59f38aea76/setup-container/0.log" Nov 25 10:52:52 crc kubenswrapper[4813]: I1125 10:52:52.282673 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_bf91d2ed-6d43-49b1-8010-1f59f38aea76/rabbitmq/1.log" Nov 25 10:52:52 crc kubenswrapper[4813]: I1125 10:52:52.292029 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_bf91d2ed-6d43-49b1-8010-1f59f38aea76/setup-container/0.log" Nov 25 10:52:52 crc kubenswrapper[4813]: I1125 10:52:52.376496 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_bf91d2ed-6d43-49b1-8010-1f59f38aea76/rabbitmq/2.log" Nov 25 10:52:52 crc kubenswrapper[4813]: I1125 10:52:52.517541 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_aea2efa1-cb45-4657-8ea6-efd7799cb0a4/setup-container/0.log" Nov 25 10:52:52 crc kubenswrapper[4813]: I1125 10:52:52.683830 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_aea2efa1-cb45-4657-8ea6-efd7799cb0a4/setup-container/0.log" Nov 25 10:52:52 crc kubenswrapper[4813]: I1125 10:52:52.722962 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_aea2efa1-cb45-4657-8ea6-efd7799cb0a4/rabbitmq/1.log" Nov 25 10:52:52 crc kubenswrapper[4813]: I1125 10:52:52.751651 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_aea2efa1-cb45-4657-8ea6-efd7799cb0a4/rabbitmq/2.log" Nov 25 10:52:54 crc kubenswrapper[4813]: I1125 10:52:54.347964 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 25 10:52:54 crc kubenswrapper[4813]: I1125 10:52:54.601333 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-qjpvf" podUID="da545e4e-8f60-4fb5-93e8-d9e9014c3c74" containerName="ovn-controller" probeResult="failure" output=< Nov 25 10:52:54 crc kubenswrapper[4813]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 25 10:52:54 crc kubenswrapper[4813]: > Nov 25 10:52:54 crc kubenswrapper[4813]: I1125 10:52:54.605834 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 25 10:52:59 crc kubenswrapper[4813]: I1125 10:52:59.604080 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-qjpvf" podUID="da545e4e-8f60-4fb5-93e8-d9e9014c3c74" containerName="ovn-controller" probeResult="failure" output=< Nov 25 10:52:59 crc kubenswrapper[4813]: ERROR - ovn-controller connection status is 'not connected', 
expecting 'connected' status Nov 25 10:52:59 crc kubenswrapper[4813]: > Nov 25 10:52:59 crc kubenswrapper[4813]: I1125 10:52:59.871362 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-xc8rb"] Nov 25 10:52:59 crc kubenswrapper[4813]: E1125 10:52:59.872385 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9277de39-22e3-485b-a1cb-9d9e4d21ee9b" containerName="container-00" Nov 25 10:52:59 crc kubenswrapper[4813]: I1125 10:52:59.872487 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="9277de39-22e3-485b-a1cb-9d9e4d21ee9b" containerName="container-00" Nov 25 10:52:59 crc kubenswrapper[4813]: I1125 10:52:59.872730 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="9277de39-22e3-485b-a1cb-9d9e4d21ee9b" containerName="container-00" Nov 25 10:52:59 crc kubenswrapper[4813]: I1125 10:52:59.873775 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-xc8rb" Nov 25 10:52:59 crc kubenswrapper[4813]: I1125 10:52:59.876637 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Nov 25 10:52:59 crc kubenswrapper[4813]: I1125 10:52:59.892359 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-xc8rb"] Nov 25 10:52:59 crc kubenswrapper[4813]: I1125 10:52:59.910160 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfwzv\" (UniqueName: \"kubernetes.io/projected/cc77e4bd-fc51-4a06-9501-0bd8b905f831-kube-api-access-bfwzv\") pod \"ovn-controller-metrics-xc8rb\" (UID: \"cc77e4bd-fc51-4a06-9501-0bd8b905f831\") " pod="openstack/ovn-controller-metrics-xc8rb" Nov 25 10:52:59 crc kubenswrapper[4813]: I1125 10:52:59.910224 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc77e4bd-fc51-4a06-9501-0bd8b905f831-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-xc8rb\" (UID: \"cc77e4bd-fc51-4a06-9501-0bd8b905f831\") " pod="openstack/ovn-controller-metrics-xc8rb" Nov 25 10:52:59 crc kubenswrapper[4813]: I1125 10:52:59.910423 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/cc77e4bd-fc51-4a06-9501-0bd8b905f831-ovn-rundir\") pod \"ovn-controller-metrics-xc8rb\" (UID: \"cc77e4bd-fc51-4a06-9501-0bd8b905f831\") " pod="openstack/ovn-controller-metrics-xc8rb" Nov 25 10:52:59 crc kubenswrapper[4813]: I1125 10:52:59.910564 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/cc77e4bd-fc51-4a06-9501-0bd8b905f831-ovs-rundir\") pod \"ovn-controller-metrics-xc8rb\" (UID: \"cc77e4bd-fc51-4a06-9501-0bd8b905f831\") " pod="openstack/ovn-controller-metrics-xc8rb" Nov 25 10:52:59 crc kubenswrapper[4813]: I1125 10:52:59.910672 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc77e4bd-fc51-4a06-9501-0bd8b905f831-config\") pod \"ovn-controller-metrics-xc8rb\" (UID: \"cc77e4bd-fc51-4a06-9501-0bd8b905f831\") " pod="openstack/ovn-controller-metrics-xc8rb" Nov 25 10:52:59 crc kubenswrapper[4813]: I1125 10:52:59.910751 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc77e4bd-fc51-4a06-9501-0bd8b905f831-combined-ca-bundle\") pod \"ovn-controller-metrics-xc8rb\" (UID: \"cc77e4bd-fc51-4a06-9501-0bd8b905f831\") " pod="openstack/ovn-controller-metrics-xc8rb" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.013825 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/cc77e4bd-fc51-4a06-9501-0bd8b905f831-ovs-rundir\") pod \"ovn-controller-metrics-xc8rb\" (UID: \"cc77e4bd-fc51-4a06-9501-0bd8b905f831\") " pod="openstack/ovn-controller-metrics-xc8rb" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.013949 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc77e4bd-fc51-4a06-9501-0bd8b905f831-config\") pod \"ovn-controller-metrics-xc8rb\" (UID: \"cc77e4bd-fc51-4a06-9501-0bd8b905f831\") " pod="openstack/ovn-controller-metrics-xc8rb" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.014010 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc77e4bd-fc51-4a06-9501-0bd8b905f831-combined-ca-bundle\") pod \"ovn-controller-metrics-xc8rb\" (UID: \"cc77e4bd-fc51-4a06-9501-0bd8b905f831\") " pod="openstack/ovn-controller-metrics-xc8rb" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.014061 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfwzv\" (UniqueName: \"kubernetes.io/projected/cc77e4bd-fc51-4a06-9501-0bd8b905f831-kube-api-access-bfwzv\") pod \"ovn-controller-metrics-xc8rb\" (UID: \"cc77e4bd-fc51-4a06-9501-0bd8b905f831\") " pod="openstack/ovn-controller-metrics-xc8rb" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.014115 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc77e4bd-fc51-4a06-9501-0bd8b905f831-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-xc8rb\" (UID: \"cc77e4bd-fc51-4a06-9501-0bd8b905f831\") " pod="openstack/ovn-controller-metrics-xc8rb" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.014203 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/cc77e4bd-fc51-4a06-9501-0bd8b905f831-ovn-rundir\") pod \"ovn-controller-metrics-xc8rb\" (UID: \"cc77e4bd-fc51-4a06-9501-0bd8b905f831\") " pod="openstack/ovn-controller-metrics-xc8rb" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.014586 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/cc77e4bd-fc51-4a06-9501-0bd8b905f831-ovn-rundir\") pod \"ovn-controller-metrics-xc8rb\" (UID: \"cc77e4bd-fc51-4a06-9501-0bd8b905f831\") " pod="openstack/ovn-controller-metrics-xc8rb" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.014660 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/cc77e4bd-fc51-4a06-9501-0bd8b905f831-ovs-rundir\") pod \"ovn-controller-metrics-xc8rb\" (UID: \"cc77e4bd-fc51-4a06-9501-0bd8b905f831\") " pod="openstack/ovn-controller-metrics-xc8rb" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.016043 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/cc77e4bd-fc51-4a06-9501-0bd8b905f831-config\") pod \"ovn-controller-metrics-xc8rb\" (UID: \"cc77e4bd-fc51-4a06-9501-0bd8b905f831\") " pod="openstack/ovn-controller-metrics-xc8rb" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.022313 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc77e4bd-fc51-4a06-9501-0bd8b905f831-combined-ca-bundle\") pod \"ovn-controller-metrics-xc8rb\" (UID: \"cc77e4bd-fc51-4a06-9501-0bd8b905f831\") " pod="openstack/ovn-controller-metrics-xc8rb" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.027343 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-gprw5"] Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.028195 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc77e4bd-fc51-4a06-9501-0bd8b905f831-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-xc8rb\" (UID: \"cc77e4bd-fc51-4a06-9501-0bd8b905f831\") " pod="openstack/ovn-controller-metrics-xc8rb" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.029250 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-gprw5" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.031868 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.048845 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfwzv\" (UniqueName: \"kubernetes.io/projected/cc77e4bd-fc51-4a06-9501-0bd8b905f831-kube-api-access-bfwzv\") pod \"ovn-controller-metrics-xc8rb\" (UID: \"cc77e4bd-fc51-4a06-9501-0bd8b905f831\") " pod="openstack/ovn-controller-metrics-xc8rb" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.055480 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-gprw5"] Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.115776 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52c7799e-eae2-4ef2-a163-4d1d18078b6d-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-gprw5\" (UID: \"52c7799e-eae2-4ef2-a163-4d1d18078b6d\") " pod="openstack/dnsmasq-dns-5bf47b49b7-gprw5" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.116040 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52c7799e-eae2-4ef2-a163-4d1d18078b6d-config\") pod \"dnsmasq-dns-5bf47b49b7-gprw5\" (UID: \"52c7799e-eae2-4ef2-a163-4d1d18078b6d\") " pod="openstack/dnsmasq-dns-5bf47b49b7-gprw5" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.116133 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/52c7799e-eae2-4ef2-a163-4d1d18078b6d-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-gprw5\" (UID: \"52c7799e-eae2-4ef2-a163-4d1d18078b6d\") " pod="openstack/dnsmasq-dns-5bf47b49b7-gprw5" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.116397 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fspkk\" (UniqueName: \"kubernetes.io/projected/52c7799e-eae2-4ef2-a163-4d1d18078b6d-kube-api-access-fspkk\") pod 
\"dnsmasq-dns-5bf47b49b7-gprw5\" (UID: \"52c7799e-eae2-4ef2-a163-4d1d18078b6d\") " pod="openstack/dnsmasq-dns-5bf47b49b7-gprw5" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.215651 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-gprw5"] Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.217905 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fspkk\" (UniqueName: \"kubernetes.io/projected/52c7799e-eae2-4ef2-a163-4d1d18078b6d-kube-api-access-fspkk\") pod \"dnsmasq-dns-5bf47b49b7-gprw5\" (UID: \"52c7799e-eae2-4ef2-a163-4d1d18078b6d\") " pod="openstack/dnsmasq-dns-5bf47b49b7-gprw5" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.217986 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52c7799e-eae2-4ef2-a163-4d1d18078b6d-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-gprw5\" (UID: \"52c7799e-eae2-4ef2-a163-4d1d18078b6d\") " pod="openstack/dnsmasq-dns-5bf47b49b7-gprw5" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.218014 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52c7799e-eae2-4ef2-a163-4d1d18078b6d-config\") pod \"dnsmasq-dns-5bf47b49b7-gprw5\" (UID: \"52c7799e-eae2-4ef2-a163-4d1d18078b6d\") " pod="openstack/dnsmasq-dns-5bf47b49b7-gprw5" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.218047 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/52c7799e-eae2-4ef2-a163-4d1d18078b6d-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-gprw5\" (UID: \"52c7799e-eae2-4ef2-a163-4d1d18078b6d\") " pod="openstack/dnsmasq-dns-5bf47b49b7-gprw5" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.219230 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/52c7799e-eae2-4ef2-a163-4d1d18078b6d-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-gprw5\" (UID: \"52c7799e-eae2-4ef2-a163-4d1d18078b6d\") " pod="openstack/dnsmasq-dns-5bf47b49b7-gprw5" Nov 25 10:53:00 crc kubenswrapper[4813]: E1125 10:53:00.220070 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc kube-api-access-fspkk ovsdbserver-nb], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-5bf47b49b7-gprw5" podUID="52c7799e-eae2-4ef2-a163-4d1d18078b6d" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.220588 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-xc8rb" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.220609 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52c7799e-eae2-4ef2-a163-4d1d18078b6d-config\") pod \"dnsmasq-dns-5bf47b49b7-gprw5\" (UID: \"52c7799e-eae2-4ef2-a163-4d1d18078b6d\") " pod="openstack/dnsmasq-dns-5bf47b49b7-gprw5" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.220845 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52c7799e-eae2-4ef2-a163-4d1d18078b6d-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-gprw5\" (UID: \"52c7799e-eae2-4ef2-a163-4d1d18078b6d\") " pod="openstack/dnsmasq-dns-5bf47b49b7-gprw5" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.253882 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-8hl8p"] Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.255802 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-8hl8p" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.261369 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.270299 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fspkk\" (UniqueName: \"kubernetes.io/projected/52c7799e-eae2-4ef2-a163-4d1d18078b6d-kube-api-access-fspkk\") pod \"dnsmasq-dns-5bf47b49b7-gprw5\" (UID: \"52c7799e-eae2-4ef2-a163-4d1d18078b6d\") " pod="openstack/dnsmasq-dns-5bf47b49b7-gprw5" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.305434 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.306890 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.317226 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.317420 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.317557 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-fsm58" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.317660 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.319107 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a32a7936-1cf0-40af-b52c-8bd0d673cc7d-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-8hl8p\" (UID: \"a32a7936-1cf0-40af-b52c-8bd0d673cc7d\") " pod="openstack/dnsmasq-dns-8554648995-8hl8p" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.319165 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a32a7936-1cf0-40af-b52c-8bd0d673cc7d-dns-svc\") pod \"dnsmasq-dns-8554648995-8hl8p\" (UID: \"a32a7936-1cf0-40af-b52c-8bd0d673cc7d\") " pod="openstack/dnsmasq-dns-8554648995-8hl8p" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.319229 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a32a7936-1cf0-40af-b52c-8bd0d673cc7d-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-8hl8p\" (UID: \"a32a7936-1cf0-40af-b52c-8bd0d673cc7d\") " pod="openstack/dnsmasq-dns-8554648995-8hl8p" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.319289 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4lqz\" (UniqueName: \"kubernetes.io/projected/a32a7936-1cf0-40af-b52c-8bd0d673cc7d-kube-api-access-z4lqz\") pod \"dnsmasq-dns-8554648995-8hl8p\" (UID: \"a32a7936-1cf0-40af-b52c-8bd0d673cc7d\") " pod="openstack/dnsmasq-dns-8554648995-8hl8p" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.319327 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a32a7936-1cf0-40af-b52c-8bd0d673cc7d-config\") pod \"dnsmasq-dns-8554648995-8hl8p\" (UID: \"a32a7936-1cf0-40af-b52c-8bd0d673cc7d\") " pod="openstack/dnsmasq-dns-8554648995-8hl8p" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.340577 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.363253 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-8hl8p"] Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.407483 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-qjpvf-config-xdz2m"] Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.409013 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-qjpvf-config-xdz2m" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.421106 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e40c3f6-ef65-4ad6-96ca-598cd0d7c094-config\") pod \"ovn-northd-0\" (UID: \"8e40c3f6-ef65-4ad6-96ca-598cd0d7c094\") " pod="openstack/ovn-northd-0" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.421168 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4lqz\" (UniqueName: \"kubernetes.io/projected/a32a7936-1cf0-40af-b52c-8bd0d673cc7d-kube-api-access-z4lqz\") pod \"dnsmasq-dns-8554648995-8hl8p\" (UID: \"a32a7936-1cf0-40af-b52c-8bd0d673cc7d\") " pod="openstack/dnsmasq-dns-8554648995-8hl8p" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.421203 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a32a7936-1cf0-40af-b52c-8bd0d673cc7d-config\") pod \"dnsmasq-dns-8554648995-8hl8p\" (UID: \"a32a7936-1cf0-40af-b52c-8bd0d673cc7d\") " pod="openstack/dnsmasq-dns-8554648995-8hl8p" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.421227 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4d4sp\" (UniqueName: \"kubernetes.io/projected/8e40c3f6-ef65-4ad6-96ca-598cd0d7c094-kube-api-access-4d4sp\") pod \"ovn-northd-0\" (UID: \"8e40c3f6-ef65-4ad6-96ca-598cd0d7c094\") " pod="openstack/ovn-northd-0" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.421251 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a32a7936-1cf0-40af-b52c-8bd0d673cc7d-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-8hl8p\" (UID: \"a32a7936-1cf0-40af-b52c-8bd0d673cc7d\") " pod="openstack/dnsmasq-dns-8554648995-8hl8p" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.421288 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e40c3f6-ef65-4ad6-96ca-598cd0d7c094-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"8e40c3f6-ef65-4ad6-96ca-598cd0d7c094\") " pod="openstack/ovn-northd-0" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.421308 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a32a7936-1cf0-40af-b52c-8bd0d673cc7d-dns-svc\") pod \"dnsmasq-dns-8554648995-8hl8p\" (UID: \"a32a7936-1cf0-40af-b52c-8bd0d673cc7d\") " pod="openstack/dnsmasq-dns-8554648995-8hl8p" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.421335 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8e40c3f6-ef65-4ad6-96ca-598cd0d7c094-scripts\") pod \"ovn-northd-0\" (UID: \"8e40c3f6-ef65-4ad6-96ca-598cd0d7c094\") " pod="openstack/ovn-northd-0" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.421354 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8e40c3f6-ef65-4ad6-96ca-598cd0d7c094-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"8e40c3f6-ef65-4ad6-96ca-598cd0d7c094\") " pod="openstack/ovn-northd-0" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.421381 4813 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e40c3f6-ef65-4ad6-96ca-598cd0d7c094-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"8e40c3f6-ef65-4ad6-96ca-598cd0d7c094\") " pod="openstack/ovn-northd-0" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.421401 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e40c3f6-ef65-4ad6-96ca-598cd0d7c094-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"8e40c3f6-ef65-4ad6-96ca-598cd0d7c094\") " pod="openstack/ovn-northd-0" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.421422 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a32a7936-1cf0-40af-b52c-8bd0d673cc7d-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-8hl8p\" (UID: \"a32a7936-1cf0-40af-b52c-8bd0d673cc7d\") " pod="openstack/dnsmasq-dns-8554648995-8hl8p" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.423219 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a32a7936-1cf0-40af-b52c-8bd0d673cc7d-config\") pod \"dnsmasq-dns-8554648995-8hl8p\" (UID: \"a32a7936-1cf0-40af-b52c-8bd0d673cc7d\") " pod="openstack/dnsmasq-dns-8554648995-8hl8p" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.423303 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a32a7936-1cf0-40af-b52c-8bd0d673cc7d-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-8hl8p\" (UID: \"a32a7936-1cf0-40af-b52c-8bd0d673cc7d\") " pod="openstack/dnsmasq-dns-8554648995-8hl8p" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.423759 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.425312 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a32a7936-1cf0-40af-b52c-8bd0d673cc7d-dns-svc\") pod \"dnsmasq-dns-8554648995-8hl8p\" (UID: \"a32a7936-1cf0-40af-b52c-8bd0d673cc7d\") " pod="openstack/dnsmasq-dns-8554648995-8hl8p" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.425596 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a32a7936-1cf0-40af-b52c-8bd0d673cc7d-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-8hl8p\" (UID: \"a32a7936-1cf0-40af-b52c-8bd0d673cc7d\") " pod="openstack/dnsmasq-dns-8554648995-8hl8p" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.425650 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-gprw5" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.429766 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-qjpvf-config-xdz2m"] Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.501492 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-gprw5" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.518183 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4lqz\" (UniqueName: \"kubernetes.io/projected/a32a7936-1cf0-40af-b52c-8bd0d673cc7d-kube-api-access-z4lqz\") pod \"dnsmasq-dns-8554648995-8hl8p\" (UID: \"a32a7936-1cf0-40af-b52c-8bd0d673cc7d\") " pod="openstack/dnsmasq-dns-8554648995-8hl8p" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.523625 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e40c3f6-ef65-4ad6-96ca-598cd0d7c094-config\") pod \"ovn-northd-0\" (UID: \"8e40c3f6-ef65-4ad6-96ca-598cd0d7c094\") " pod="openstack/ovn-northd-0" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.525332 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e40c3f6-ef65-4ad6-96ca-598cd0d7c094-config\") pod \"ovn-northd-0\" (UID: \"8e40c3f6-ef65-4ad6-96ca-598cd0d7c094\") " pod="openstack/ovn-northd-0" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.525736 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7a68a160-be1d-4cd4-affa-8bff03a38908-var-run-ovn\") pod \"ovn-controller-qjpvf-config-xdz2m\" (UID: \"7a68a160-be1d-4cd4-affa-8bff03a38908\") " pod="openstack/ovn-controller-qjpvf-config-xdz2m" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.525783 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7a68a160-be1d-4cd4-affa-8bff03a38908-var-run\") pod \"ovn-controller-qjpvf-config-xdz2m\" (UID: \"7a68a160-be1d-4cd4-affa-8bff03a38908\") " pod="openstack/ovn-controller-qjpvf-config-xdz2m" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.525827 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7a68a160-be1d-4cd4-affa-8bff03a38908-var-log-ovn\") pod \"ovn-controller-qjpvf-config-xdz2m\" (UID: \"7a68a160-be1d-4cd4-affa-8bff03a38908\") " pod="openstack/ovn-controller-qjpvf-config-xdz2m" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.525844 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7a68a160-be1d-4cd4-affa-8bff03a38908-scripts\") pod \"ovn-controller-qjpvf-config-xdz2m\" (UID: \"7a68a160-be1d-4cd4-affa-8bff03a38908\") " pod="openstack/ovn-controller-qjpvf-config-xdz2m" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.525873 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7a68a160-be1d-4cd4-affa-8bff03a38908-additional-scripts\") pod \"ovn-controller-qjpvf-config-xdz2m\" (UID: \"7a68a160-be1d-4cd4-affa-8bff03a38908\") " pod="openstack/ovn-controller-qjpvf-config-xdz2m" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.525898 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4d4sp\" (UniqueName: \"kubernetes.io/projected/8e40c3f6-ef65-4ad6-96ca-598cd0d7c094-kube-api-access-4d4sp\") pod \"ovn-northd-0\" (UID: \"8e40c3f6-ef65-4ad6-96ca-598cd0d7c094\") " 
pod="openstack/ovn-northd-0" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.525968 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e40c3f6-ef65-4ad6-96ca-598cd0d7c094-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"8e40c3f6-ef65-4ad6-96ca-598cd0d7c094\") " pod="openstack/ovn-northd-0" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.526004 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-647wn\" (UniqueName: \"kubernetes.io/projected/7a68a160-be1d-4cd4-affa-8bff03a38908-kube-api-access-647wn\") pod \"ovn-controller-qjpvf-config-xdz2m\" (UID: \"7a68a160-be1d-4cd4-affa-8bff03a38908\") " pod="openstack/ovn-controller-qjpvf-config-xdz2m" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.526035 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8e40c3f6-ef65-4ad6-96ca-598cd0d7c094-scripts\") pod \"ovn-northd-0\" (UID: \"8e40c3f6-ef65-4ad6-96ca-598cd0d7c094\") " pod="openstack/ovn-northd-0" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.526056 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8e40c3f6-ef65-4ad6-96ca-598cd0d7c094-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"8e40c3f6-ef65-4ad6-96ca-598cd0d7c094\") " pod="openstack/ovn-northd-0" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.526092 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e40c3f6-ef65-4ad6-96ca-598cd0d7c094-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"8e40c3f6-ef65-4ad6-96ca-598cd0d7c094\") " pod="openstack/ovn-northd-0" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.526119 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e40c3f6-ef65-4ad6-96ca-598cd0d7c094-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"8e40c3f6-ef65-4ad6-96ca-598cd0d7c094\") " pod="openstack/ovn-northd-0" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.528295 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8e40c3f6-ef65-4ad6-96ca-598cd0d7c094-scripts\") pod \"ovn-northd-0\" (UID: \"8e40c3f6-ef65-4ad6-96ca-598cd0d7c094\") " pod="openstack/ovn-northd-0" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.531414 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e40c3f6-ef65-4ad6-96ca-598cd0d7c094-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"8e40c3f6-ef65-4ad6-96ca-598cd0d7c094\") " pod="openstack/ovn-northd-0" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.533082 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8e40c3f6-ef65-4ad6-96ca-598cd0d7c094-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"8e40c3f6-ef65-4ad6-96ca-598cd0d7c094\") " pod="openstack/ovn-northd-0" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.549266 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e40c3f6-ef65-4ad6-96ca-598cd0d7c094-metrics-certs-tls-certs\") pod 
\"ovn-northd-0\" (UID: \"8e40c3f6-ef65-4ad6-96ca-598cd0d7c094\") " pod="openstack/ovn-northd-0" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.550171 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e40c3f6-ef65-4ad6-96ca-598cd0d7c094-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"8e40c3f6-ef65-4ad6-96ca-598cd0d7c094\") " pod="openstack/ovn-northd-0" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.582570 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4d4sp\" (UniqueName: \"kubernetes.io/projected/8e40c3f6-ef65-4ad6-96ca-598cd0d7c094-kube-api-access-4d4sp\") pod \"ovn-northd-0\" (UID: \"8e40c3f6-ef65-4ad6-96ca-598cd0d7c094\") " pod="openstack/ovn-northd-0" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.629298 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fspkk\" (UniqueName: \"kubernetes.io/projected/52c7799e-eae2-4ef2-a163-4d1d18078b6d-kube-api-access-fspkk\") pod \"52c7799e-eae2-4ef2-a163-4d1d18078b6d\" (UID: \"52c7799e-eae2-4ef2-a163-4d1d18078b6d\") " Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.629411 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52c7799e-eae2-4ef2-a163-4d1d18078b6d-config\") pod \"52c7799e-eae2-4ef2-a163-4d1d18078b6d\" (UID: \"52c7799e-eae2-4ef2-a163-4d1d18078b6d\") " Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.629487 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/52c7799e-eae2-4ef2-a163-4d1d18078b6d-ovsdbserver-nb\") pod \"52c7799e-eae2-4ef2-a163-4d1d18078b6d\" (UID: \"52c7799e-eae2-4ef2-a163-4d1d18078b6d\") " Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.629696 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52c7799e-eae2-4ef2-a163-4d1d18078b6d-dns-svc\") pod \"52c7799e-eae2-4ef2-a163-4d1d18078b6d\" (UID: \"52c7799e-eae2-4ef2-a163-4d1d18078b6d\") " Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.629900 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7a68a160-be1d-4cd4-affa-8bff03a38908-var-run-ovn\") pod \"ovn-controller-qjpvf-config-xdz2m\" (UID: \"7a68a160-be1d-4cd4-affa-8bff03a38908\") " pod="openstack/ovn-controller-qjpvf-config-xdz2m" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.629927 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7a68a160-be1d-4cd4-affa-8bff03a38908-var-run\") pod \"ovn-controller-qjpvf-config-xdz2m\" (UID: \"7a68a160-be1d-4cd4-affa-8bff03a38908\") " pod="openstack/ovn-controller-qjpvf-config-xdz2m" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.629963 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7a68a160-be1d-4cd4-affa-8bff03a38908-var-log-ovn\") pod \"ovn-controller-qjpvf-config-xdz2m\" (UID: \"7a68a160-be1d-4cd4-affa-8bff03a38908\") " pod="openstack/ovn-controller-qjpvf-config-xdz2m" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.629981 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/7a68a160-be1d-4cd4-affa-8bff03a38908-scripts\") pod \"ovn-controller-qjpvf-config-xdz2m\" (UID: \"7a68a160-be1d-4cd4-affa-8bff03a38908\") " pod="openstack/ovn-controller-qjpvf-config-xdz2m" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.629998 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7a68a160-be1d-4cd4-affa-8bff03a38908-additional-scripts\") pod \"ovn-controller-qjpvf-config-xdz2m\" (UID: \"7a68a160-be1d-4cd4-affa-8bff03a38908\") " pod="openstack/ovn-controller-qjpvf-config-xdz2m" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.630051 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-647wn\" (UniqueName: \"kubernetes.io/projected/7a68a160-be1d-4cd4-affa-8bff03a38908-kube-api-access-647wn\") pod \"ovn-controller-qjpvf-config-xdz2m\" (UID: \"7a68a160-be1d-4cd4-affa-8bff03a38908\") " pod="openstack/ovn-controller-qjpvf-config-xdz2m" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.630207 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52c7799e-eae2-4ef2-a163-4d1d18078b6d-config" (OuterVolumeSpecName: "config") pod "52c7799e-eae2-4ef2-a163-4d1d18078b6d" (UID: "52c7799e-eae2-4ef2-a163-4d1d18078b6d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.630899 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7a68a160-be1d-4cd4-affa-8bff03a38908-var-run\") pod \"ovn-controller-qjpvf-config-xdz2m\" (UID: \"7a68a160-be1d-4cd4-affa-8bff03a38908\") " pod="openstack/ovn-controller-qjpvf-config-xdz2m" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.630978 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7a68a160-be1d-4cd4-affa-8bff03a38908-var-log-ovn\") pod \"ovn-controller-qjpvf-config-xdz2m\" (UID: \"7a68a160-be1d-4cd4-affa-8bff03a38908\") " pod="openstack/ovn-controller-qjpvf-config-xdz2m" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.631008 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52c7799e-eae2-4ef2-a163-4d1d18078b6d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "52c7799e-eae2-4ef2-a163-4d1d18078b6d" (UID: "52c7799e-eae2-4ef2-a163-4d1d18078b6d"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.631007 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7a68a160-be1d-4cd4-affa-8bff03a38908-var-run-ovn\") pod \"ovn-controller-qjpvf-config-xdz2m\" (UID: \"7a68a160-be1d-4cd4-affa-8bff03a38908\") " pod="openstack/ovn-controller-qjpvf-config-xdz2m" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.631543 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7a68a160-be1d-4cd4-affa-8bff03a38908-additional-scripts\") pod \"ovn-controller-qjpvf-config-xdz2m\" (UID: \"7a68a160-be1d-4cd4-affa-8bff03a38908\") " pod="openstack/ovn-controller-qjpvf-config-xdz2m" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.631820 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52c7799e-eae2-4ef2-a163-4d1d18078b6d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "52c7799e-eae2-4ef2-a163-4d1d18078b6d" (UID: "52c7799e-eae2-4ef2-a163-4d1d18078b6d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.632888 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7a68a160-be1d-4cd4-affa-8bff03a38908-scripts\") pod \"ovn-controller-qjpvf-config-xdz2m\" (UID: \"7a68a160-be1d-4cd4-affa-8bff03a38908\") " pod="openstack/ovn-controller-qjpvf-config-xdz2m" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.639864 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52c7799e-eae2-4ef2-a163-4d1d18078b6d-kube-api-access-fspkk" (OuterVolumeSpecName: "kube-api-access-fspkk") pod "52c7799e-eae2-4ef2-a163-4d1d18078b6d" (UID: "52c7799e-eae2-4ef2-a163-4d1d18078b6d"). InnerVolumeSpecName "kube-api-access-fspkk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.673323 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-647wn\" (UniqueName: \"kubernetes.io/projected/7a68a160-be1d-4cd4-affa-8bff03a38908-kube-api-access-647wn\") pod \"ovn-controller-qjpvf-config-xdz2m\" (UID: \"7a68a160-be1d-4cd4-affa-8bff03a38908\") " pod="openstack/ovn-controller-qjpvf-config-xdz2m" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.708175 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-8hl8p" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.732914 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/52c7799e-eae2-4ef2-a163-4d1d18078b6d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.732952 4813 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52c7799e-eae2-4ef2-a163-4d1d18078b6d-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.732962 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fspkk\" (UniqueName: \"kubernetes.io/projected/52c7799e-eae2-4ef2-a163-4d1d18078b6d-kube-api-access-fspkk\") on node \"crc\" DevicePath \"\"" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.732972 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52c7799e-eae2-4ef2-a163-4d1d18078b6d-config\") on node \"crc\" DevicePath \"\"" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.751228 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.782165 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-qjpvf-config-xdz2m" Nov 25 10:53:00 crc kubenswrapper[4813]: I1125 10:53:00.886904 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-xc8rb"] Nov 25 10:53:01 crc kubenswrapper[4813]: I1125 10:53:01.209808 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-8hl8p"] Nov 25 10:53:01 crc kubenswrapper[4813]: I1125 10:53:01.329236 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-qjpvf-config-xdz2m"] Nov 25 10:53:01 crc kubenswrapper[4813]: I1125 10:53:01.436264 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-xc8rb" event={"ID":"cc77e4bd-fc51-4a06-9501-0bd8b905f831","Type":"ContainerStarted","Data":"bcb05727021860b891796a3e5eaf800da53f8adcb92e0e26138216e8b79d893e"} Nov 25 10:53:01 crc kubenswrapper[4813]: I1125 10:53:01.436311 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-xc8rb" event={"ID":"cc77e4bd-fc51-4a06-9501-0bd8b905f831","Type":"ContainerStarted","Data":"2d610f5f13f0c24acf3fc63510914fb4b288d0a7aeb8e11b0d2bb8eabe4b4998"} Nov 25 10:53:01 crc kubenswrapper[4813]: I1125 10:53:01.438211 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qjpvf-config-xdz2m" event={"ID":"7a68a160-be1d-4cd4-affa-8bff03a38908","Type":"ContainerStarted","Data":"a144c4f5fe0ad0d09e06e4b1ee921b6719bee5fd56b502567b94645db8902ee0"} Nov 25 10:53:01 crc kubenswrapper[4813]: I1125 10:53:01.439425 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-gprw5" Nov 25 10:53:01 crc kubenswrapper[4813]: I1125 10:53:01.439504 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-8hl8p" event={"ID":"a32a7936-1cf0-40af-b52c-8bd0d673cc7d","Type":"ContainerStarted","Data":"23a7d589fc6d88587ebc333a470293b1bdec39626b82a2bb6f1c906179a84e86"} Nov 25 10:53:01 crc kubenswrapper[4813]: I1125 10:53:01.464833 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-xc8rb" podStartSLOduration=2.464813523 podStartE2EDuration="2.464813523s" podCreationTimestamp="2025-11-25 10:52:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 10:53:01.452987907 +0000 UTC m=+1278.582697793" watchObservedRunningTime="2025-11-25 10:53:01.464813523 +0000 UTC m=+1278.594523409" Nov 25 10:53:01 crc kubenswrapper[4813]: I1125 10:53:01.507211 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-gprw5"] Nov 25 10:53:01 crc kubenswrapper[4813]: I1125 10:53:01.517310 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-gprw5"] Nov 25 10:53:01 crc kubenswrapper[4813]: I1125 10:53:01.524464 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 25 10:53:01 crc kubenswrapper[4813]: W1125 10:53:01.537819 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8e40c3f6_ef65_4ad6_96ca_598cd0d7c094.slice/crio-5866bfd161a3ed5a973bf074a766e9847b9e4370eb8d057216add4b46f66b18f WatchSource:0}: Error finding container 5866bfd161a3ed5a973bf074a766e9847b9e4370eb8d057216add4b46f66b18f: Status 404 returned error can't find the container with id 5866bfd161a3ed5a973bf074a766e9847b9e4370eb8d057216add4b46f66b18f Nov 25 10:53:01 crc kubenswrapper[4813]: I1125 10:53:01.634918 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52c7799e-eae2-4ef2-a163-4d1d18078b6d" path="/var/lib/kubelet/pods/52c7799e-eae2-4ef2-a163-4d1d18078b6d/volumes" Nov 25 10:53:02 crc kubenswrapper[4813]: I1125 10:53:02.456503 4813 generic.go:334] "Generic (PLEG): container finished" podID="7a68a160-be1d-4cd4-affa-8bff03a38908" containerID="a842c717311b9d77bd230ca0651e6c69d493ab2dd108b1c2b511e057a24071fa" exitCode=0 Nov 25 10:53:02 crc kubenswrapper[4813]: I1125 10:53:02.456702 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qjpvf-config-xdz2m" event={"ID":"7a68a160-be1d-4cd4-affa-8bff03a38908","Type":"ContainerDied","Data":"a842c717311b9d77bd230ca0651e6c69d493ab2dd108b1c2b511e057a24071fa"} Nov 25 10:53:02 crc kubenswrapper[4813]: I1125 10:53:02.462186 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"8e40c3f6-ef65-4ad6-96ca-598cd0d7c094","Type":"ContainerStarted","Data":"5866bfd161a3ed5a973bf074a766e9847b9e4370eb8d057216add4b46f66b18f"} Nov 25 10:53:02 crc kubenswrapper[4813]: I1125 10:53:02.471140 4813 generic.go:334] "Generic (PLEG): container finished" podID="a32a7936-1cf0-40af-b52c-8bd0d673cc7d" containerID="c10157aff1e62d7515cf676abaaab12cadb79c6e36348b6afdd7fef15c197482" exitCode=0 Nov 25 10:53:02 crc kubenswrapper[4813]: I1125 10:53:02.471194 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-8hl8p" 
event={"ID":"a32a7936-1cf0-40af-b52c-8bd0d673cc7d","Type":"ContainerDied","Data":"c10157aff1e62d7515cf676abaaab12cadb79c6e36348b6afdd7fef15c197482"} Nov 25 10:53:03 crc kubenswrapper[4813]: I1125 10:53:03.405574 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Nov 25 10:53:03 crc kubenswrapper[4813]: I1125 10:53:03.482054 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"8e40c3f6-ef65-4ad6-96ca-598cd0d7c094","Type":"ContainerStarted","Data":"7f1df4feab7fcd60fafd2c15d43a6e1780d2294a51ab29ed574b8b560b3781de"} Nov 25 10:53:03 crc kubenswrapper[4813]: I1125 10:53:03.804061 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-qjpvf-config-xdz2m" Nov 25 10:53:03 crc kubenswrapper[4813]: I1125 10:53:03.822639 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-6b84b955f5-mmrm7" Nov 25 10:53:03 crc kubenswrapper[4813]: I1125 10:53:03.909399 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7a68a160-be1d-4cd4-affa-8bff03a38908-var-run-ovn\") pod \"7a68a160-be1d-4cd4-affa-8bff03a38908\" (UID: \"7a68a160-be1d-4cd4-affa-8bff03a38908\") " Nov 25 10:53:03 crc kubenswrapper[4813]: I1125 10:53:03.909491 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7a68a160-be1d-4cd4-affa-8bff03a38908-var-log-ovn\") pod \"7a68a160-be1d-4cd4-affa-8bff03a38908\" (UID: \"7a68a160-be1d-4cd4-affa-8bff03a38908\") " Nov 25 10:53:03 crc kubenswrapper[4813]: I1125 10:53:03.909533 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7a68a160-be1d-4cd4-affa-8bff03a38908-additional-scripts\") pod \"7a68a160-be1d-4cd4-affa-8bff03a38908\" (UID: \"7a68a160-be1d-4cd4-affa-8bff03a38908\") " Nov 25 10:53:03 crc kubenswrapper[4813]: I1125 10:53:03.909595 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-647wn\" (UniqueName: \"kubernetes.io/projected/7a68a160-be1d-4cd4-affa-8bff03a38908-kube-api-access-647wn\") pod \"7a68a160-be1d-4cd4-affa-8bff03a38908\" (UID: \"7a68a160-be1d-4cd4-affa-8bff03a38908\") " Nov 25 10:53:03 crc kubenswrapper[4813]: I1125 10:53:03.909658 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7a68a160-be1d-4cd4-affa-8bff03a38908-scripts\") pod \"7a68a160-be1d-4cd4-affa-8bff03a38908\" (UID: \"7a68a160-be1d-4cd4-affa-8bff03a38908\") " Nov 25 10:53:03 crc kubenswrapper[4813]: I1125 10:53:03.909740 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7a68a160-be1d-4cd4-affa-8bff03a38908-var-run\") pod \"7a68a160-be1d-4cd4-affa-8bff03a38908\" (UID: \"7a68a160-be1d-4cd4-affa-8bff03a38908\") " Nov 25 10:53:03 crc kubenswrapper[4813]: I1125 10:53:03.910886 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a68a160-be1d-4cd4-affa-8bff03a38908-var-run" (OuterVolumeSpecName: "var-run") pod "7a68a160-be1d-4cd4-affa-8bff03a38908" (UID: "7a68a160-be1d-4cd4-affa-8bff03a38908"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 10:53:03 crc kubenswrapper[4813]: I1125 10:53:03.911958 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a68a160-be1d-4cd4-affa-8bff03a38908-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "7a68a160-be1d-4cd4-affa-8bff03a38908" (UID: "7a68a160-be1d-4cd4-affa-8bff03a38908"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 10:53:03 crc kubenswrapper[4813]: I1125 10:53:03.912009 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a68a160-be1d-4cd4-affa-8bff03a38908-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "7a68a160-be1d-4cd4-affa-8bff03a38908" (UID: "7a68a160-be1d-4cd4-affa-8bff03a38908"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 10:53:03 crc kubenswrapper[4813]: I1125 10:53:03.912183 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a68a160-be1d-4cd4-affa-8bff03a38908-scripts" (OuterVolumeSpecName: "scripts") pod "7a68a160-be1d-4cd4-affa-8bff03a38908" (UID: "7a68a160-be1d-4cd4-affa-8bff03a38908"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:53:03 crc kubenswrapper[4813]: I1125 10:53:03.913551 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a68a160-be1d-4cd4-affa-8bff03a38908-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "7a68a160-be1d-4cd4-affa-8bff03a38908" (UID: "7a68a160-be1d-4cd4-affa-8bff03a38908"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:53:03 crc kubenswrapper[4813]: I1125 10:53:03.918177 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a68a160-be1d-4cd4-affa-8bff03a38908-kube-api-access-647wn" (OuterVolumeSpecName: "kube-api-access-647wn") pod "7a68a160-be1d-4cd4-affa-8bff03a38908" (UID: "7a68a160-be1d-4cd4-affa-8bff03a38908"). InnerVolumeSpecName "kube-api-access-647wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:53:04 crc kubenswrapper[4813]: I1125 10:53:04.011565 4813 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7a68a160-be1d-4cd4-affa-8bff03a38908-var-log-ovn\") on node \"crc\" DevicePath \"\"" Nov 25 10:53:04 crc kubenswrapper[4813]: I1125 10:53:04.011612 4813 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7a68a160-be1d-4cd4-affa-8bff03a38908-additional-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 10:53:04 crc kubenswrapper[4813]: I1125 10:53:04.011626 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-647wn\" (UniqueName: \"kubernetes.io/projected/7a68a160-be1d-4cd4-affa-8bff03a38908-kube-api-access-647wn\") on node \"crc\" DevicePath \"\"" Nov 25 10:53:04 crc kubenswrapper[4813]: I1125 10:53:04.011638 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7a68a160-be1d-4cd4-affa-8bff03a38908-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 10:53:04 crc kubenswrapper[4813]: I1125 10:53:04.011649 4813 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7a68a160-be1d-4cd4-affa-8bff03a38908-var-run\") on node \"crc\" DevicePath \"\"" Nov 25 10:53:04 crc kubenswrapper[4813]: I1125 10:53:04.011658 4813 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7a68a160-be1d-4cd4-affa-8bff03a38908-var-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 25 10:53:04 crc kubenswrapper[4813]: I1125 10:53:04.409266 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Nov 25 10:53:04 crc kubenswrapper[4813]: I1125 10:53:04.491796 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"8e40c3f6-ef65-4ad6-96ca-598cd0d7c094","Type":"ContainerStarted","Data":"8b4e8da80ae57ed6c32bf4eb962bd6ebf2bebca6210a69381386ff0c5f514a29"} Nov 25 10:53:04 crc kubenswrapper[4813]: I1125 10:53:04.492213 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Nov 25 10:53:04 crc kubenswrapper[4813]: I1125 10:53:04.493820 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qjpvf-config-xdz2m" event={"ID":"7a68a160-be1d-4cd4-affa-8bff03a38908","Type":"ContainerDied","Data":"a144c4f5fe0ad0d09e06e4b1ee921b6719bee5fd56b502567b94645db8902ee0"} Nov 25 10:53:04 crc kubenswrapper[4813]: I1125 10:53:04.493847 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a144c4f5fe0ad0d09e06e4b1ee921b6719bee5fd56b502567b94645db8902ee0" Nov 25 10:53:04 crc kubenswrapper[4813]: I1125 10:53:04.493846 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-qjpvf-config-xdz2m" Nov 25 10:53:04 crc kubenswrapper[4813]: I1125 10:53:04.503361 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-8hl8p" event={"ID":"a32a7936-1cf0-40af-b52c-8bd0d673cc7d","Type":"ContainerStarted","Data":"75fa019a17d58a1a77c4d1f9b677d4f5d73fe636f19c260404feb88150d88f4f"} Nov 25 10:53:04 crc kubenswrapper[4813]: I1125 10:53:04.504479 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-8hl8p" Nov 25 10:53:04 crc kubenswrapper[4813]: I1125 10:53:04.520789 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=3.008608661 podStartE2EDuration="4.520763919s" podCreationTimestamp="2025-11-25 10:53:00 +0000 UTC" firstStartedPulling="2025-11-25 10:53:01.542217382 +0000 UTC m=+1278.671927268" lastFinishedPulling="2025-11-25 10:53:03.05437264 +0000 UTC m=+1280.184082526" observedRunningTime="2025-11-25 10:53:04.511051503 +0000 UTC m=+1281.640761409" watchObservedRunningTime="2025-11-25 10:53:04.520763919 +0000 UTC m=+1281.650473815" Nov 25 10:53:04 crc kubenswrapper[4813]: I1125 10:53:04.545915 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-8hl8p" podStartSLOduration=4.5458930429999995 podStartE2EDuration="4.545893043s" podCreationTimestamp="2025-11-25 10:53:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 10:53:04.543186246 +0000 UTC m=+1281.672896142" watchObservedRunningTime="2025-11-25 10:53:04.545893043 +0000 UTC m=+1281.675602929" Nov 25 10:53:04 crc kubenswrapper[4813]: I1125 10:53:04.592927 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-qjpvf" Nov 25 10:53:04 crc kubenswrapper[4813]: I1125 10:53:04.878992 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="9005be17-9874-4f4f-bd91-39b3c74314ec" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:53:04 crc kubenswrapper[4813]: I1125 10:53:04.920218 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-qjpvf-config-xdz2m"] Nov 25 10:53:04 crc kubenswrapper[4813]: I1125 10:53:04.928597 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-qjpvf-config-xdz2m"] Nov 25 10:53:05 crc kubenswrapper[4813]: I1125 10:53:05.632058 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a68a160-be1d-4cd4-affa-8bff03a38908" path="/var/lib/kubelet/pods/7a68a160-be1d-4cd4-affa-8bff03a38908/volumes" Nov 25 10:53:05 crc kubenswrapper[4813]: I1125 10:53:05.871398 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="0444b7b3-af36-4fca-80c6-8348adc42a58" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:53:06 crc kubenswrapper[4813]: I1125 10:53:06.869249 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="9005be17-9874-4f4f-bd91-39b3c74314ec" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:53:06 crc kubenswrapper[4813]: I1125 10:53:06.870622 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="9005be17-9874-4f4f-bd91-39b3c74314ec" 
containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:53:08 crc kubenswrapper[4813]: I1125 10:53:08.871338 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="0444b7b3-af36-4fca-80c6-8348adc42a58" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:53:08 crc kubenswrapper[4813]: I1125 10:53:08.871338 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="0444b7b3-af36-4fca-80c6-8348adc42a58" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:53:10 crc kubenswrapper[4813]: I1125 10:53:10.709865 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8554648995-8hl8p" Nov 25 10:53:10 crc kubenswrapper[4813]: I1125 10:53:10.775359 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-6bkfh"] Nov 25 10:53:10 crc kubenswrapper[4813]: I1125 10:53:10.775997 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-6bkfh" podUID="fe34d8fb-5b40-4191-8015-acb5ed8ea562" containerName="dnsmasq-dns" containerID="cri-o://d266f540efb34a26e056028f65971d7fda1fa41d8ec89450215ed6baa2b8b4b0" gracePeriod=10 Nov 25 10:53:11 crc kubenswrapper[4813]: I1125 10:53:11.283470 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-6bkfh" Nov 25 10:53:11 crc kubenswrapper[4813]: I1125 10:53:11.437588 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjvcc\" (UniqueName: \"kubernetes.io/projected/fe34d8fb-5b40-4191-8015-acb5ed8ea562-kube-api-access-tjvcc\") pod \"fe34d8fb-5b40-4191-8015-acb5ed8ea562\" (UID: \"fe34d8fb-5b40-4191-8015-acb5ed8ea562\") " Nov 25 10:53:11 crc kubenswrapper[4813]: I1125 10:53:11.438111 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe34d8fb-5b40-4191-8015-acb5ed8ea562-config\") pod \"fe34d8fb-5b40-4191-8015-acb5ed8ea562\" (UID: \"fe34d8fb-5b40-4191-8015-acb5ed8ea562\") " Nov 25 10:53:11 crc kubenswrapper[4813]: I1125 10:53:11.438288 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fe34d8fb-5b40-4191-8015-acb5ed8ea562-dns-svc\") pod \"fe34d8fb-5b40-4191-8015-acb5ed8ea562\" (UID: \"fe34d8fb-5b40-4191-8015-acb5ed8ea562\") " Nov 25 10:53:11 crc kubenswrapper[4813]: I1125 10:53:11.456309 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe34d8fb-5b40-4191-8015-acb5ed8ea562-kube-api-access-tjvcc" (OuterVolumeSpecName: "kube-api-access-tjvcc") pod "fe34d8fb-5b40-4191-8015-acb5ed8ea562" (UID: "fe34d8fb-5b40-4191-8015-acb5ed8ea562"). InnerVolumeSpecName "kube-api-access-tjvcc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:53:11 crc kubenswrapper[4813]: I1125 10:53:11.488384 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe34d8fb-5b40-4191-8015-acb5ed8ea562-config" (OuterVolumeSpecName: "config") pod "fe34d8fb-5b40-4191-8015-acb5ed8ea562" (UID: "fe34d8fb-5b40-4191-8015-acb5ed8ea562"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:53:11 crc kubenswrapper[4813]: I1125 10:53:11.496123 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe34d8fb-5b40-4191-8015-acb5ed8ea562-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fe34d8fb-5b40-4191-8015-acb5ed8ea562" (UID: "fe34d8fb-5b40-4191-8015-acb5ed8ea562"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:53:11 crc kubenswrapper[4813]: I1125 10:53:11.540325 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tjvcc\" (UniqueName: \"kubernetes.io/projected/fe34d8fb-5b40-4191-8015-acb5ed8ea562-kube-api-access-tjvcc\") on node \"crc\" DevicePath \"\"" Nov 25 10:53:11 crc kubenswrapper[4813]: I1125 10:53:11.540374 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe34d8fb-5b40-4191-8015-acb5ed8ea562-config\") on node \"crc\" DevicePath \"\"" Nov 25 10:53:11 crc kubenswrapper[4813]: I1125 10:53:11.540386 4813 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fe34d8fb-5b40-4191-8015-acb5ed8ea562-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 10:53:11 crc kubenswrapper[4813]: I1125 10:53:11.557755 4813 generic.go:334] "Generic (PLEG): container finished" podID="fe34d8fb-5b40-4191-8015-acb5ed8ea562" containerID="d266f540efb34a26e056028f65971d7fda1fa41d8ec89450215ed6baa2b8b4b0" exitCode=0 Nov 25 10:53:11 crc kubenswrapper[4813]: I1125 10:53:11.557811 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-6bkfh" Nov 25 10:53:11 crc kubenswrapper[4813]: I1125 10:53:11.557811 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-6bkfh" event={"ID":"fe34d8fb-5b40-4191-8015-acb5ed8ea562","Type":"ContainerDied","Data":"d266f540efb34a26e056028f65971d7fda1fa41d8ec89450215ed6baa2b8b4b0"} Nov 25 10:53:11 crc kubenswrapper[4813]: I1125 10:53:11.557916 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-6bkfh" event={"ID":"fe34d8fb-5b40-4191-8015-acb5ed8ea562","Type":"ContainerDied","Data":"8efc4af9dd622d1354fc662e13203da3d0869a4c858a281e7d1e57f6f51500a6"} Nov 25 10:53:11 crc kubenswrapper[4813]: I1125 10:53:11.557940 4813 scope.go:117] "RemoveContainer" containerID="d266f540efb34a26e056028f65971d7fda1fa41d8ec89450215ed6baa2b8b4b0" Nov 25 10:53:11 crc kubenswrapper[4813]: I1125 10:53:11.581491 4813 scope.go:117] "RemoveContainer" containerID="4615a8ee8e91c187c4653e8c28a0bc9ff1603cb48d303069542e282bd550f9af" Nov 25 10:53:11 crc kubenswrapper[4813]: I1125 10:53:11.593721 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-6bkfh"] Nov 25 10:53:11 crc kubenswrapper[4813]: I1125 10:53:11.600560 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-6bkfh"] Nov 25 10:53:11 crc kubenswrapper[4813]: I1125 10:53:11.603074 4813 scope.go:117] "RemoveContainer" containerID="d266f540efb34a26e056028f65971d7fda1fa41d8ec89450215ed6baa2b8b4b0" Nov 25 10:53:11 crc kubenswrapper[4813]: E1125 10:53:11.603531 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d266f540efb34a26e056028f65971d7fda1fa41d8ec89450215ed6baa2b8b4b0\": container with ID starting with 
d266f540efb34a26e056028f65971d7fda1fa41d8ec89450215ed6baa2b8b4b0 not found: ID does not exist" containerID="d266f540efb34a26e056028f65971d7fda1fa41d8ec89450215ed6baa2b8b4b0" Nov 25 10:53:11 crc kubenswrapper[4813]: I1125 10:53:11.603582 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d266f540efb34a26e056028f65971d7fda1fa41d8ec89450215ed6baa2b8b4b0"} err="failed to get container status \"d266f540efb34a26e056028f65971d7fda1fa41d8ec89450215ed6baa2b8b4b0\": rpc error: code = NotFound desc = could not find container \"d266f540efb34a26e056028f65971d7fda1fa41d8ec89450215ed6baa2b8b4b0\": container with ID starting with d266f540efb34a26e056028f65971d7fda1fa41d8ec89450215ed6baa2b8b4b0 not found: ID does not exist" Nov 25 10:53:11 crc kubenswrapper[4813]: I1125 10:53:11.603609 4813 scope.go:117] "RemoveContainer" containerID="4615a8ee8e91c187c4653e8c28a0bc9ff1603cb48d303069542e282bd550f9af" Nov 25 10:53:11 crc kubenswrapper[4813]: E1125 10:53:11.603922 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4615a8ee8e91c187c4653e8c28a0bc9ff1603cb48d303069542e282bd550f9af\": container with ID starting with 4615a8ee8e91c187c4653e8c28a0bc9ff1603cb48d303069542e282bd550f9af not found: ID does not exist" containerID="4615a8ee8e91c187c4653e8c28a0bc9ff1603cb48d303069542e282bd550f9af" Nov 25 10:53:11 crc kubenswrapper[4813]: I1125 10:53:11.603977 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4615a8ee8e91c187c4653e8c28a0bc9ff1603cb48d303069542e282bd550f9af"} err="failed to get container status \"4615a8ee8e91c187c4653e8c28a0bc9ff1603cb48d303069542e282bd550f9af\": rpc error: code = NotFound desc = could not find container \"4615a8ee8e91c187c4653e8c28a0bc9ff1603cb48d303069542e282bd550f9af\": container with ID starting with 4615a8ee8e91c187c4653e8c28a0bc9ff1603cb48d303069542e282bd550f9af not found: ID does not exist" Nov 25 10:53:11 crc kubenswrapper[4813]: I1125 10:53:11.640385 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe34d8fb-5b40-4191-8015-acb5ed8ea562" path="/var/lib/kubelet/pods/fe34d8fb-5b40-4191-8015-acb5ed8ea562/volumes" Nov 25 10:53:14 crc kubenswrapper[4813]: I1125 10:53:14.823117 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_a0db7ad644315a6604c792d09ae3ae2623e8b1b6f1f68951b50777854d7x5gz_5722665a-1565-49c6-887f-4ed446b4efd4/util/0.log" Nov 25 10:53:15 crc kubenswrapper[4813]: I1125 10:53:15.107646 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_a0db7ad644315a6604c792d09ae3ae2623e8b1b6f1f68951b50777854d7x5gz_5722665a-1565-49c6-887f-4ed446b4efd4/pull/0.log" Nov 25 10:53:15 crc kubenswrapper[4813]: I1125 10:53:15.139591 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_a0db7ad644315a6604c792d09ae3ae2623e8b1b6f1f68951b50777854d7x5gz_5722665a-1565-49c6-887f-4ed446b4efd4/util/0.log" Nov 25 10:53:15 crc kubenswrapper[4813]: I1125 10:53:15.169636 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_a0db7ad644315a6604c792d09ae3ae2623e8b1b6f1f68951b50777854d7x5gz_5722665a-1565-49c6-887f-4ed446b4efd4/pull/0.log" Nov 25 10:53:15 crc kubenswrapper[4813]: I1125 10:53:15.363524 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_a0db7ad644315a6604c792d09ae3ae2623e8b1b6f1f68951b50777854d7x5gz_5722665a-1565-49c6-887f-4ed446b4efd4/pull/0.log" 
Nov 25 10:53:15 crc kubenswrapper[4813]: I1125 10:53:15.408450 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_a0db7ad644315a6604c792d09ae3ae2623e8b1b6f1f68951b50777854d7x5gz_5722665a-1565-49c6-887f-4ed446b4efd4/extract/0.log" Nov 25 10:53:15 crc kubenswrapper[4813]: I1125 10:53:15.430118 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_a0db7ad644315a6604c792d09ae3ae2623e8b1b6f1f68951b50777854d7x5gz_5722665a-1565-49c6-887f-4ed446b4efd4/util/0.log" Nov 25 10:53:15 crc kubenswrapper[4813]: I1125 10:53:15.571062 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-86dc4d89c8-4wff2_03c63a63-9a46-4bda-941b-8c5ba81a13fe/kube-rbac-proxy/0.log" Nov 25 10:53:15 crc kubenswrapper[4813]: I1125 10:53:15.612484 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-86dc4d89c8-4wff2_03c63a63-9a46-4bda-941b-8c5ba81a13fe/manager/4.log" Nov 25 10:53:15 crc kubenswrapper[4813]: I1125 10:53:15.667163 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-86dc4d89c8-4wff2_03c63a63-9a46-4bda-941b-8c5ba81a13fe/manager/3.log" Nov 25 10:53:15 crc kubenswrapper[4813]: I1125 10:53:15.764314 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-79856dc55c-dvfd9_a650bdd3-2541-4b76-b5db-64273262bc06/kube-rbac-proxy/0.log" Nov 25 10:53:15 crc kubenswrapper[4813]: I1125 10:53:15.832964 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Nov 25 10:53:15 crc kubenswrapper[4813]: I1125 10:53:15.837908 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-79856dc55c-dvfd9_a650bdd3-2541-4b76-b5db-64273262bc06/manager/4.log" Nov 25 10:53:15 crc kubenswrapper[4813]: I1125 10:53:15.956126 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-79856dc55c-dvfd9_a650bdd3-2541-4b76-b5db-64273262bc06/manager/3.log" Nov 25 10:53:16 crc kubenswrapper[4813]: I1125 10:53:16.046535 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-7d695c9b56-hjqzd_aa2934d9-d547-49d0-9d06-232120b44fa1/kube-rbac-proxy/0.log" Nov 25 10:53:16 crc kubenswrapper[4813]: I1125 10:53:16.364929 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-7d695c9b56-hjqzd_aa2934d9-d547-49d0-9d06-232120b44fa1/manager/4.log" Nov 25 10:53:16 crc kubenswrapper[4813]: I1125 10:53:16.412517 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-547cf68667-6v6dd_71c5bfc5-a289-4942-bc55-819f06787eb6/kube-rbac-proxy/0.log" Nov 25 10:53:16 crc kubenswrapper[4813]: I1125 10:53:16.416815 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-7d695c9b56-hjqzd_aa2934d9-d547-49d0-9d06-232120b44fa1/manager/3.log" Nov 25 10:53:16 crc kubenswrapper[4813]: I1125 10:53:16.583050 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-547cf68667-6v6dd_71c5bfc5-a289-4942-bc55-819f06787eb6/manager/3.log" Nov 25 10:53:16 crc kubenswrapper[4813]: I1125 
10:53:16.637654 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-547cf68667-6v6dd_71c5bfc5-a289-4942-bc55-819f06787eb6/manager/4.log" Nov 25 10:53:16 crc kubenswrapper[4813]: I1125 10:53:16.700465 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-774b86978c-f6dvp_eaf6f1c0-6585-4eba-8baf-942ed2503735/kube-rbac-proxy/0.log" Nov 25 10:53:16 crc kubenswrapper[4813]: I1125 10:53:16.869838 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="9005be17-9874-4f4f-bd91-39b3c74314ec" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:53:16 crc kubenswrapper[4813]: I1125 10:53:16.869926 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="9005be17-9874-4f4f-bd91-39b3c74314ec" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:53:16 crc kubenswrapper[4813]: I1125 10:53:16.999222 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-774b86978c-f6dvp_eaf6f1c0-6585-4eba-8baf-942ed2503735/manager/4.log" Nov 25 10:53:17 crc kubenswrapper[4813]: I1125 10:53:17.046189 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-774b86978c-f6dvp_eaf6f1c0-6585-4eba-8baf-942ed2503735/manager/3.log" Nov 25 10:53:17 crc kubenswrapper[4813]: I1125 10:53:17.167248 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c9694994-8spkk_af18e07e-95b3-476f-9604-824c36ae74a5/kube-rbac-proxy/0.log" Nov 25 10:53:17 crc kubenswrapper[4813]: I1125 10:53:17.208990 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c9694994-8spkk_af18e07e-95b3-476f-9604-824c36ae74a5/manager/4.log" Nov 25 10:53:17 crc kubenswrapper[4813]: I1125 10:53:17.275090 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c9694994-8spkk_af18e07e-95b3-476f-9604-824c36ae74a5/manager/3.log" Nov 25 10:53:17 crc kubenswrapper[4813]: I1125 10:53:17.487322 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-858778c9dc-fs9sm_06c81a1e-0461-4457-85ea-1a4060423eda/kube-rbac-proxy/0.log" Nov 25 10:53:17 crc kubenswrapper[4813]: I1125 10:53:17.491388 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-858778c9dc-fs9sm_06c81a1e-0461-4457-85ea-1a4060423eda/manager/4.log" Nov 25 10:53:17 crc kubenswrapper[4813]: I1125 10:53:17.652986 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-858778c9dc-fs9sm_06c81a1e-0461-4457-85ea-1a4060423eda/manager/3.log" Nov 25 10:53:17 crc kubenswrapper[4813]: I1125 10:53:17.803981 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5bfcdc958c-blrjt_d4a62556-e6e8-42dc-b7e4-180c40611393/kube-rbac-proxy/0.log" Nov 25 10:53:17 crc kubenswrapper[4813]: I1125 10:53:17.819991 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5bfcdc958c-blrjt_d4a62556-e6e8-42dc-b7e4-180c40611393/manager/4.log" Nov 25 10:53:17 
crc kubenswrapper[4813]: I1125 10:53:17.909010 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5bfcdc958c-blrjt_d4a62556-e6e8-42dc-b7e4-180c40611393/manager/3.log" Nov 25 10:53:17 crc kubenswrapper[4813]: I1125 10:53:17.990217 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-748dc6576f-76j46_7921584b-8ce0-45b8-8a56-ab0fdde43582/kube-rbac-proxy/0.log" Nov 25 10:53:18 crc kubenswrapper[4813]: I1125 10:53:18.084916 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-748dc6576f-76j46_7921584b-8ce0-45b8-8a56-ab0fdde43582/manager/4.log" Nov 25 10:53:18 crc kubenswrapper[4813]: I1125 10:53:18.722319 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58bb8d67cc-jcjzx_efca9205-8a59-45ce-8c50-36b0d0389f12/kube-rbac-proxy/0.log" Nov 25 10:53:18 crc kubenswrapper[4813]: I1125 10:53:18.722720 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-748dc6576f-76j46_7921584b-8ce0-45b8-8a56-ab0fdde43582/manager/3.log" Nov 25 10:53:18 crc kubenswrapper[4813]: I1125 10:53:18.722888 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58bb8d67cc-jcjzx_efca9205-8a59-45ce-8c50-36b0d0389f12/manager/4.log" Nov 25 10:53:18 crc kubenswrapper[4813]: I1125 10:53:18.869635 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="0444b7b3-af36-4fca-80c6-8348adc42a58" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:53:18 crc kubenswrapper[4813]: I1125 10:53:18.870389 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="0444b7b3-af36-4fca-80c6-8348adc42a58" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:53:19 crc kubenswrapper[4813]: I1125 10:53:19.040187 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58bb8d67cc-jcjzx_efca9205-8a59-45ce-8c50-36b0d0389f12/manager/3.log" Nov 25 10:53:19 crc kubenswrapper[4813]: I1125 10:53:19.074163 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-cb6c4fdb7-5ldjd_baf6f7bb-db50-4013-8b77-2b7e4c8101c2/kube-rbac-proxy/0.log" Nov 25 10:53:19 crc kubenswrapper[4813]: I1125 10:53:19.106632 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-cb6c4fdb7-5ldjd_baf6f7bb-db50-4013-8b77-2b7e4c8101c2/manager/4.log" Nov 25 10:53:19 crc kubenswrapper[4813]: I1125 10:53:19.191144 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-cb6c4fdb7-5ldjd_baf6f7bb-db50-4013-8b77-2b7e4c8101c2/manager/3.log" Nov 25 10:53:19 crc kubenswrapper[4813]: I1125 10:53:19.328091 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7c57c8bbc4-c6kw6_b69526d6-6616-4536-a228-4cdb57e1881c/kube-rbac-proxy/0.log" Nov 25 10:53:19 crc kubenswrapper[4813]: I1125 10:53:19.338085 4813 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7c57c8bbc4-c6kw6_b69526d6-6616-4536-a228-4cdb57e1881c/manager/4.log" Nov 25 10:53:19 crc kubenswrapper[4813]: I1125 10:53:19.464983 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7c57c8bbc4-c6kw6_b69526d6-6616-4536-a228-4cdb57e1881c/manager/3.log" Nov 25 10:53:19 crc kubenswrapper[4813]: I1125 10:53:19.612950 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-6j272_9374bbb0-b458-4c1c-a327-67bcbea83045/manager/4.log" Nov 25 10:53:19 crc kubenswrapper[4813]: I1125 10:53:19.613243 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-6j272_9374bbb0-b458-4c1c-a327-67bcbea83045/kube-rbac-proxy/0.log" Nov 25 10:53:19 crc kubenswrapper[4813]: I1125 10:53:19.854739 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-6j272_9374bbb0-b458-4c1c-a327-67bcbea83045/manager/3.log" Nov 25 10:53:19 crc kubenswrapper[4813]: I1125 10:53:19.984885 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-fd75fd47d-gjs27_a31ffbb8-0255-45d6-9125-6cccc7b444ba/kube-rbac-proxy/0.log" Nov 25 10:53:20 crc kubenswrapper[4813]: I1125 10:53:20.030559 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-fd75fd47d-gjs27_a31ffbb8-0255-45d6-9125-6cccc7b444ba/manager/4.log" Nov 25 10:53:20 crc kubenswrapper[4813]: I1125 10:53:20.160074 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-fd75fd47d-gjs27_a31ffbb8-0255-45d6-9125-6cccc7b444ba/manager/3.log" Nov 25 10:53:20 crc kubenswrapper[4813]: I1125 10:53:20.182002 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-544b9bb9-v2clw_0a946ff2-f2e3-48c2-ae3b-774a4ea85492/kube-rbac-proxy/0.log" Nov 25 10:53:20 crc kubenswrapper[4813]: I1125 10:53:20.220945 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-544b9bb9-v2clw_0a946ff2-f2e3-48c2-ae3b-774a4ea85492/manager/1.log" Nov 25 10:53:20 crc kubenswrapper[4813]: I1125 10:53:20.263190 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-544b9bb9-v2clw_0a946ff2-f2e3-48c2-ae3b-774a4ea85492/manager/0.log" Nov 25 10:53:20 crc kubenswrapper[4813]: I1125 10:53:20.406887 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5ffc8f797b-hbwwd_09bd1800-0aaa-4908-ac58-e0890a2a309f/manager/4.log" Nov 25 10:53:20 crc kubenswrapper[4813]: I1125 10:53:20.412246 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5ffc8f797b-hbwwd_09bd1800-0aaa-4908-ac58-e0890a2a309f/manager/3.log" Nov 25 10:53:20 crc kubenswrapper[4813]: I1125 10:53:20.529452 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-577fbd7764-z9m8h_32603f59-2392-4c3e-9d25-ba1fe7376687/operator/1.log" Nov 25 10:53:20 crc kubenswrapper[4813]: I1125 10:53:20.687615 4813 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-577fbd7764-z9m8h_32603f59-2392-4c3e-9d25-ba1fe7376687/operator/0.log" Nov 25 10:53:20 crc kubenswrapper[4813]: I1125 10:53:20.707706 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-nkcj2_da5dd33d-c08b-45ba-af6c-86748ecaf7b0/registry-server/0.log" Nov 25 10:53:20 crc kubenswrapper[4813]: I1125 10:53:20.769181 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-66cf5c67ff-tc2mg_db556642-a360-4559-8cde-7c25d7a893e0/kube-rbac-proxy/0.log" Nov 25 10:53:20 crc kubenswrapper[4813]: I1125 10:53:20.913621 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-66cf5c67ff-tc2mg_db556642-a360-4559-8cde-7c25d7a893e0/manager/4.log" Nov 25 10:53:20 crc kubenswrapper[4813]: I1125 10:53:20.934353 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5db546f9d9-2d2x7_9093a664-86f3-4349-bd13-0a5e4aca8036/kube-rbac-proxy/0.log" Nov 25 10:53:20 crc kubenswrapper[4813]: I1125 10:53:20.958364 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-66cf5c67ff-tc2mg_db556642-a360-4559-8cde-7c25d7a893e0/manager/3.log" Nov 25 10:53:21 crc kubenswrapper[4813]: I1125 10:53:21.037713 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5db546f9d9-2d2x7_9093a664-86f3-4349-bd13-0a5e4aca8036/manager/4.log" Nov 25 10:53:21 crc kubenswrapper[4813]: I1125 10:53:21.108305 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5db546f9d9-2d2x7_9093a664-86f3-4349-bd13-0a5e4aca8036/manager/3.log" Nov 25 10:53:21 crc kubenswrapper[4813]: I1125 10:53:21.152455 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-qd4tx_2bf03402-32ec-423d-a6af-657bc0cfeb15/operator/4.log" Nov 25 10:53:21 crc kubenswrapper[4813]: I1125 10:53:21.173328 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-qd4tx_2bf03402-32ec-423d-a6af-657bc0cfeb15/operator/3.log" Nov 25 10:53:21 crc kubenswrapper[4813]: I1125 10:53:21.333953 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6fdc4fcf86-fjkzd_94c3d2b4-f1bb-402d-a39d-78e16bee970b/manager/4.log" Nov 25 10:53:21 crc kubenswrapper[4813]: I1125 10:53:21.343553 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6fdc4fcf86-fjkzd_94c3d2b4-f1bb-402d-a39d-78e16bee970b/manager/3.log" Nov 25 10:53:21 crc kubenswrapper[4813]: I1125 10:53:21.365695 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6fdc4fcf86-fjkzd_94c3d2b4-f1bb-402d-a39d-78e16bee970b/kube-rbac-proxy/0.log" Nov 25 10:53:21 crc kubenswrapper[4813]: I1125 10:53:21.533800 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-567f98c9d-qplf9_5f9254c7-c8dc-4504-bdf5-264c78e03b0c/kube-rbac-proxy/0.log" Nov 25 10:53:21 crc kubenswrapper[4813]: I1125 10:53:21.563400 4813 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-567f98c9d-qplf9_5f9254c7-c8dc-4504-bdf5-264c78e03b0c/manager/4.log" Nov 25 10:53:21 crc kubenswrapper[4813]: I1125 10:53:21.597315 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-567f98c9d-qplf9_5f9254c7-c8dc-4504-bdf5-264c78e03b0c/manager/3.log" Nov 25 10:53:21 crc kubenswrapper[4813]: I1125 10:53:21.650440 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cb74df96-cwrzw_49b29226-49bf-4d59-9c7f-998d924bdace/kube-rbac-proxy/0.log" Nov 25 10:53:21 crc kubenswrapper[4813]: I1125 10:53:21.740830 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cb74df96-cwrzw_49b29226-49bf-4d59-9c7f-998d924bdace/manager/0.log" Nov 25 10:53:21 crc kubenswrapper[4813]: I1125 10:53:21.741727 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cb74df96-cwrzw_49b29226-49bf-4d59-9c7f-998d924bdace/manager/1.log" Nov 25 10:53:21 crc kubenswrapper[4813]: I1125 10:53:21.798804 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-864885998-bpbjt_48ea1018-a88f-4ef0-a82f-7e3b012522ec/kube-rbac-proxy/0.log" Nov 25 10:53:21 crc kubenswrapper[4813]: I1125 10:53:21.844141 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-864885998-bpbjt_48ea1018-a88f-4ef0-a82f-7e3b012522ec/manager/4.log" Nov 25 10:53:21 crc kubenswrapper[4813]: I1125 10:53:21.949616 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-864885998-bpbjt_48ea1018-a88f-4ef0-a82f-7e3b012522ec/manager/3.log" Nov 25 10:53:21 crc kubenswrapper[4813]: I1125 10:53:21.967310 4813 patch_prober.go:28] interesting pod/machine-config-daemon-knhz8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 10:53:21 crc kubenswrapper[4813]: I1125 10:53:21.967365 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" podUID="8ece7e9c-d49a-4348-98ec-bd6ab589f750" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 10:53:21 crc kubenswrapper[4813]: I1125 10:53:21.967505 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" Nov 25 10:53:21 crc kubenswrapper[4813]: I1125 10:53:21.968238 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"aa994bf2afc77b306a9a9dd90fad6893b4b3c7e60546773c6c8bfb41dfb47486"} pod="openshift-machine-config-operator/machine-config-daemon-knhz8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 10:53:21 crc kubenswrapper[4813]: I1125 10:53:21.968296 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" 
podUID="8ece7e9c-d49a-4348-98ec-bd6ab589f750" containerName="machine-config-daemon" containerID="cri-o://aa994bf2afc77b306a9a9dd90fad6893b4b3c7e60546773c6c8bfb41dfb47486" gracePeriod=600 Nov 25 10:53:22 crc kubenswrapper[4813]: I1125 10:53:22.773403 4813 generic.go:334] "Generic (PLEG): container finished" podID="8ece7e9c-d49a-4348-98ec-bd6ab589f750" containerID="aa994bf2afc77b306a9a9dd90fad6893b4b3c7e60546773c6c8bfb41dfb47486" exitCode=0 Nov 25 10:53:22 crc kubenswrapper[4813]: I1125 10:53:22.773491 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" event={"ID":"8ece7e9c-d49a-4348-98ec-bd6ab589f750","Type":"ContainerDied","Data":"aa994bf2afc77b306a9a9dd90fad6893b4b3c7e60546773c6c8bfb41dfb47486"} Nov 25 10:53:22 crc kubenswrapper[4813]: I1125 10:53:22.773810 4813 scope.go:117] "RemoveContainer" containerID="efbe54cb2ef6c89c7fb03c162ec904d1deff9a1b48f07c1332fb33b84a4f4c6c" Nov 25 10:53:23 crc kubenswrapper[4813]: I1125 10:53:23.788264 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" event={"ID":"8ece7e9c-d49a-4348-98ec-bd6ab589f750","Type":"ContainerStarted","Data":"d35cff04017f1dbbd2dc074e7ec64c0bf7970af72edd947e4aaa4314840c882a"} Nov 25 10:53:26 crc kubenswrapper[4813]: I1125 10:53:26.869327 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="9005be17-9874-4f4f-bd91-39b3c74314ec" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:53:26 crc kubenswrapper[4813]: I1125 10:53:26.869343 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="9005be17-9874-4f4f-bd91-39b3c74314ec" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:53:26 crc kubenswrapper[4813]: I1125 10:53:26.870071 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/openstack-galera-0" Nov 25 10:53:26 crc kubenswrapper[4813]: I1125 10:53:26.871114 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"b16a26f36fccfcb8b61af7eaa248906d6db217d5a30d65751b58e03481a6507e"} pod="openstack/openstack-galera-0" containerMessage="Container galera failed liveness probe, will be restarted" Nov 25 10:53:27 crc kubenswrapper[4813]: I1125 10:53:27.110185 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="9005be17-9874-4f4f-bd91-39b3c74314ec" containerName="galera" containerID="cri-o://b16a26f36fccfcb8b61af7eaa248906d6db217d5a30d65751b58e03481a6507e" gracePeriod=30 Nov 25 10:53:27 crc kubenswrapper[4813]: I1125 10:53:27.823078 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="9005be17-9874-4f4f-bd91-39b3c74314ec" containerName="galera" probeResult="failure" output=< Nov 25 10:53:27 crc kubenswrapper[4813]: WARNING: password retrieved from cluster failed authentication Nov 25 10:53:27 crc kubenswrapper[4813]: > Nov 25 10:53:28 crc kubenswrapper[4813]: I1125 10:53:28.838068 4813 generic.go:334] "Generic (PLEG): container finished" podID="9005be17-9874-4f4f-bd91-39b3c74314ec" containerID="b16a26f36fccfcb8b61af7eaa248906d6db217d5a30d65751b58e03481a6507e" exitCode=0 Nov 25 10:53:28 crc kubenswrapper[4813]: I1125 10:53:28.838169 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" 
event={"ID":"9005be17-9874-4f4f-bd91-39b3c74314ec","Type":"ContainerDied","Data":"b16a26f36fccfcb8b61af7eaa248906d6db217d5a30d65751b58e03481a6507e"} Nov 25 10:53:28 crc kubenswrapper[4813]: I1125 10:53:28.838443 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9005be17-9874-4f4f-bd91-39b3c74314ec","Type":"ContainerStarted","Data":"4ddbb2e7aae14ce2438409cffc3d4b2cc015e0f5b04258c050e4393a6a044ddb"} Nov 25 10:53:28 crc kubenswrapper[4813]: I1125 10:53:28.869776 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="0444b7b3-af36-4fca-80c6-8348adc42a58" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:53:28 crc kubenswrapper[4813]: I1125 10:53:28.869907 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Nov 25 10:53:28 crc kubenswrapper[4813]: I1125 10:53:28.870708 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"b93750874c71ca3d1d7d50f1fb30894eba5c684143e2d04b019e83cdce65e424"} pod="openstack/openstack-cell1-galera-0" containerMessage="Container galera failed liveness probe, will be restarted" Nov 25 10:53:28 crc kubenswrapper[4813]: I1125 10:53:28.871495 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="0444b7b3-af36-4fca-80c6-8348adc42a58" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:53:29 crc kubenswrapper[4813]: I1125 10:53:29.256781 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="0444b7b3-af36-4fca-80c6-8348adc42a58" containerName="galera" containerID="cri-o://b93750874c71ca3d1d7d50f1fb30894eba5c684143e2d04b019e83cdce65e424" gracePeriod=30 Nov 25 10:53:29 crc kubenswrapper[4813]: I1125 10:53:29.827478 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="0444b7b3-af36-4fca-80c6-8348adc42a58" containerName="galera" probeResult="failure" output=< Nov 25 10:53:29 crc kubenswrapper[4813]: WARNING: password retrieved from cluster failed authentication Nov 25 10:53:29 crc kubenswrapper[4813]: > Nov 25 10:53:30 crc kubenswrapper[4813]: I1125 10:53:30.859132 4813 generic.go:334] "Generic (PLEG): container finished" podID="0444b7b3-af36-4fca-80c6-8348adc42a58" containerID="b93750874c71ca3d1d7d50f1fb30894eba5c684143e2d04b019e83cdce65e424" exitCode=0 Nov 25 10:53:30 crc kubenswrapper[4813]: I1125 10:53:30.859241 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"0444b7b3-af36-4fca-80c6-8348adc42a58","Type":"ContainerDied","Data":"b93750874c71ca3d1d7d50f1fb30894eba5c684143e2d04b019e83cdce65e424"} Nov 25 10:53:30 crc kubenswrapper[4813]: I1125 10:53:30.859788 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"0444b7b3-af36-4fca-80c6-8348adc42a58","Type":"ContainerStarted","Data":"a0282e9d6adda4400290b979af7557f52adf84f954307c5aa31c09b51514f6b1"} Nov 25 10:53:36 crc kubenswrapper[4813]: I1125 10:53:36.017775 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Nov 25 10:53:36 crc kubenswrapper[4813]: I1125 10:53:36.018355 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Nov 25 
10:53:36 crc kubenswrapper[4813]: I1125 10:53:36.188145 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Nov 25 10:53:37 crc kubenswrapper[4813]: I1125 10:53:37.465974 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Nov 25 10:53:37 crc kubenswrapper[4813]: I1125 10:53:37.466298 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Nov 25 10:53:37 crc kubenswrapper[4813]: I1125 10:53:37.634065 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Nov 25 10:53:37 crc kubenswrapper[4813]: I1125 10:53:37.869164 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="9005be17-9874-4f4f-bd91-39b3c74314ec" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:53:38 crc kubenswrapper[4813]: I1125 10:53:38.869992 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="0444b7b3-af36-4fca-80c6-8348adc42a58" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:53:40 crc kubenswrapper[4813]: I1125 10:53:40.764744 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-vn7cb_dc9b8a2f-2bce-43c9-a8c5-1bf29d7d5964/control-plane-machine-set-operator/0.log" Nov 25 10:53:40 crc kubenswrapper[4813]: I1125 10:53:40.944390 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-48zrm_616a1226-9627-43a9-a1a7-5dfb4cf863d8/kube-rbac-proxy/0.log" Nov 25 10:53:40 crc kubenswrapper[4813]: I1125 10:53:40.961936 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-48zrm_616a1226-9627-43a9-a1a7-5dfb4cf863d8/machine-api-operator/0.log" Nov 25 10:53:46 crc kubenswrapper[4813]: I1125 10:53:46.869611 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="9005be17-9874-4f4f-bd91-39b3c74314ec" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:53:46 crc kubenswrapper[4813]: I1125 10:53:46.869856 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="9005be17-9874-4f4f-bd91-39b3c74314ec" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:53:48 crc kubenswrapper[4813]: I1125 10:53:48.870500 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="0444b7b3-af36-4fca-80c6-8348adc42a58" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:53:48 crc kubenswrapper[4813]: I1125 10:53:48.871031 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="0444b7b3-af36-4fca-80c6-8348adc42a58" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:53:53 crc kubenswrapper[4813]: I1125 10:53:53.276634 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-ds4rg_ee2b9b30-2c9f-4c88-b31b-a20957e03939/cert-manager-controller/0.log" Nov 25 10:53:53 crc kubenswrapper[4813]: I1125 10:53:53.299284 4813 log.go:25] "Finished parsing log file" 
path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-ds4rg_ee2b9b30-2c9f-4c88-b31b-a20957e03939/cert-manager-controller/1.log" Nov 25 10:53:53 crc kubenswrapper[4813]: I1125 10:53:53.462558 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-9bjpb_396645a8-bd9a-429a-8d95-33dcec24c4ba/cert-manager-cainjector/3.log" Nov 25 10:53:53 crc kubenswrapper[4813]: I1125 10:53:53.491343 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-9bjpb_396645a8-bd9a-429a-8d95-33dcec24c4ba/cert-manager-cainjector/2.log" Nov 25 10:53:53 crc kubenswrapper[4813]: I1125 10:53:53.612568 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-74f7x_d25f3a31-9925-4bbb-959f-be2a544fca3a/cert-manager-webhook/0.log" Nov 25 10:53:56 crc kubenswrapper[4813]: I1125 10:53:56.869343 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="9005be17-9874-4f4f-bd91-39b3c74314ec" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:53:56 crc kubenswrapper[4813]: I1125 10:53:56.869369 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="9005be17-9874-4f4f-bd91-39b3c74314ec" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:53:58 crc kubenswrapper[4813]: I1125 10:53:58.873434 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="0444b7b3-af36-4fca-80c6-8348adc42a58" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:53:58 crc kubenswrapper[4813]: I1125 10:53:58.879223 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="0444b7b3-af36-4fca-80c6-8348adc42a58" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:54:05 crc kubenswrapper[4813]: I1125 10:54:05.069300 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5874bd7bc5-k5lrd_d4f77244-1065-4f49-9ab3-23f0fb4e24c9/nmstate-console-plugin/0.log" Nov 25 10:54:05 crc kubenswrapper[4813]: I1125 10:54:05.268018 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-8b4xz_63cd6170-52aa-4bfb-8376-0b7a8da3f64e/nmstate-handler/0.log" Nov 25 10:54:05 crc kubenswrapper[4813]: I1125 10:54:05.280316 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-v5l6z_4c2eda27-6d33-43b0-847a-7da2f657251e/nmstate-metrics/0.log" Nov 25 10:54:05 crc kubenswrapper[4813]: I1125 10:54:05.336233 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-v5l6z_4c2eda27-6d33-43b0-847a-7da2f657251e/kube-rbac-proxy/0.log" Nov 25 10:54:05 crc kubenswrapper[4813]: I1125 10:54:05.495233 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-557fdffb88-nnkm5_55e83365-d0ce-4274-a5f5-ee89147342bf/nmstate-operator/0.log" Nov 25 10:54:05 crc kubenswrapper[4813]: I1125 10:54:05.529848 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-6b89b748d8-92nqv_98566240-794d-4769-8ca4-7e92f2e158cf/nmstate-webhook/0.log" Nov 25 10:54:06 crc kubenswrapper[4813]: I1125 10:54:06.870847 4813 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openstack/openstack-galera-0" podUID="9005be17-9874-4f4f-bd91-39b3c74314ec" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:54:06 crc kubenswrapper[4813]: I1125 10:54:06.871215 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/openstack-galera-0" Nov 25 10:54:06 crc kubenswrapper[4813]: I1125 10:54:06.871247 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="9005be17-9874-4f4f-bd91-39b3c74314ec" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:54:06 crc kubenswrapper[4813]: I1125 10:54:06.871775 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"4ddbb2e7aae14ce2438409cffc3d4b2cc015e0f5b04258c050e4393a6a044ddb"} pod="openstack/openstack-galera-0" containerMessage="Container galera failed liveness probe, will be restarted" Nov 25 10:54:07 crc kubenswrapper[4813]: I1125 10:54:07.109813 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="9005be17-9874-4f4f-bd91-39b3c74314ec" containerName="galera" containerID="cri-o://4ddbb2e7aae14ce2438409cffc3d4b2cc015e0f5b04258c050e4393a6a044ddb" gracePeriod=30 Nov 25 10:54:07 crc kubenswrapper[4813]: I1125 10:54:07.733658 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="9005be17-9874-4f4f-bd91-39b3c74314ec" containerName="galera" probeResult="failure" output=< Nov 25 10:54:07 crc kubenswrapper[4813]: WARNING: password retrieved from cluster failed authentication Nov 25 10:54:07 crc kubenswrapper[4813]: > Nov 25 10:54:08 crc kubenswrapper[4813]: I1125 10:54:08.143942 4813 generic.go:334] "Generic (PLEG): container finished" podID="9005be17-9874-4f4f-bd91-39b3c74314ec" containerID="4ddbb2e7aae14ce2438409cffc3d4b2cc015e0f5b04258c050e4393a6a044ddb" exitCode=0 Nov 25 10:54:08 crc kubenswrapper[4813]: I1125 10:54:08.144053 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9005be17-9874-4f4f-bd91-39b3c74314ec","Type":"ContainerDied","Data":"4ddbb2e7aae14ce2438409cffc3d4b2cc015e0f5b04258c050e4393a6a044ddb"} Nov 25 10:54:08 crc kubenswrapper[4813]: I1125 10:54:08.144345 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9005be17-9874-4f4f-bd91-39b3c74314ec","Type":"ContainerStarted","Data":"ddfa60bfa8fd381a916836614d060ee6460f4aefaf155d20b4cc9b7d222ecb37"} Nov 25 10:54:08 crc kubenswrapper[4813]: I1125 10:54:08.144369 4813 scope.go:117] "RemoveContainer" containerID="b16a26f36fccfcb8b61af7eaa248906d6db217d5a30d65751b58e03481a6507e" Nov 25 10:54:08 crc kubenswrapper[4813]: I1125 10:54:08.869079 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="0444b7b3-af36-4fca-80c6-8348adc42a58" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:54:08 crc kubenswrapper[4813]: I1125 10:54:08.869155 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Nov 25 10:54:08 crc kubenswrapper[4813]: I1125 10:54:08.869862 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="0444b7b3-af36-4fca-80c6-8348adc42a58" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:54:08 crc kubenswrapper[4813]: I1125 
10:54:08.869892 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"a0282e9d6adda4400290b979af7557f52adf84f954307c5aa31c09b51514f6b1"} pod="openstack/openstack-cell1-galera-0" containerMessage="Container galera failed liveness probe, will be restarted" Nov 25 10:54:09 crc kubenswrapper[4813]: I1125 10:54:09.071023 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="0444b7b3-af36-4fca-80c6-8348adc42a58" containerName="galera" containerID="cri-o://a0282e9d6adda4400290b979af7557f52adf84f954307c5aa31c09b51514f6b1" gracePeriod=30 Nov 25 10:54:09 crc kubenswrapper[4813]: I1125 10:54:09.869381 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="0444b7b3-af36-4fca-80c6-8348adc42a58" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:54:10 crc kubenswrapper[4813]: I1125 10:54:10.165286 4813 generic.go:334] "Generic (PLEG): container finished" podID="0444b7b3-af36-4fca-80c6-8348adc42a58" containerID="a0282e9d6adda4400290b979af7557f52adf84f954307c5aa31c09b51514f6b1" exitCode=0 Nov 25 10:54:10 crc kubenswrapper[4813]: I1125 10:54:10.165330 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"0444b7b3-af36-4fca-80c6-8348adc42a58","Type":"ContainerDied","Data":"a0282e9d6adda4400290b979af7557f52adf84f954307c5aa31c09b51514f6b1"} Nov 25 10:54:10 crc kubenswrapper[4813]: I1125 10:54:10.165366 4813 scope.go:117] "RemoveContainer" containerID="b93750874c71ca3d1d7d50f1fb30894eba5c684143e2d04b019e83cdce65e424" Nov 25 10:54:11 crc kubenswrapper[4813]: I1125 10:54:11.176250 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"0444b7b3-af36-4fca-80c6-8348adc42a58","Type":"ContainerStarted","Data":"ba754884ad309d450a0aa3d538b1abaf3712fcbb78f29d0de1c6d9b4de5a090c"} Nov 25 10:54:16 crc kubenswrapper[4813]: I1125 10:54:16.017570 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Nov 25 10:54:16 crc kubenswrapper[4813]: I1125 10:54:16.017925 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Nov 25 10:54:16 crc kubenswrapper[4813]: I1125 10:54:16.198289 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Nov 25 10:54:17 crc kubenswrapper[4813]: I1125 10:54:17.466860 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Nov 25 10:54:17 crc kubenswrapper[4813]: I1125 10:54:17.466915 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Nov 25 10:54:17 crc kubenswrapper[4813]: I1125 10:54:17.645948 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Nov 25 10:54:17 crc kubenswrapper[4813]: I1125 10:54:17.869805 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="9005be17-9874-4f4f-bd91-39b3c74314ec" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:54:19 crc kubenswrapper[4813]: I1125 10:54:19.870110 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="0444b7b3-af36-4fca-80c6-8348adc42a58" 
containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:54:20 crc kubenswrapper[4813]: I1125 10:54:20.116660 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-brpp6_269e623a-f673-45c5-8377-29b4d98a8778/kube-rbac-proxy/0.log" Nov 25 10:54:20 crc kubenswrapper[4813]: I1125 10:54:20.299312 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-6998585d5-4mmlv_49ae2f6f-61f5-4577-ad9f-cce3678795ef/frr-k8s-webhook-server/0.log" Nov 25 10:54:20 crc kubenswrapper[4813]: I1125 10:54:20.599648 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-brpp6_269e623a-f673-45c5-8377-29b4d98a8778/controller/0.log" Nov 25 10:54:20 crc kubenswrapper[4813]: I1125 10:54:20.673430 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z9zl6_851ec932-482a-43c0-a100-ee8378bb527e/cp-frr-files/0.log" Nov 25 10:54:20 crc kubenswrapper[4813]: I1125 10:54:20.864477 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z9zl6_851ec932-482a-43c0-a100-ee8378bb527e/cp-frr-files/0.log" Nov 25 10:54:20 crc kubenswrapper[4813]: I1125 10:54:20.874717 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z9zl6_851ec932-482a-43c0-a100-ee8378bb527e/cp-reloader/0.log" Nov 25 10:54:20 crc kubenswrapper[4813]: I1125 10:54:20.898165 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z9zl6_851ec932-482a-43c0-a100-ee8378bb527e/cp-reloader/0.log" Nov 25 10:54:20 crc kubenswrapper[4813]: I1125 10:54:20.923049 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z9zl6_851ec932-482a-43c0-a100-ee8378bb527e/cp-metrics/0.log" Nov 25 10:54:21 crc kubenswrapper[4813]: I1125 10:54:21.061014 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z9zl6_851ec932-482a-43c0-a100-ee8378bb527e/cp-frr-files/0.log" Nov 25 10:54:21 crc kubenswrapper[4813]: I1125 10:54:21.106860 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z9zl6_851ec932-482a-43c0-a100-ee8378bb527e/cp-metrics/0.log" Nov 25 10:54:21 crc kubenswrapper[4813]: I1125 10:54:21.138979 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z9zl6_851ec932-482a-43c0-a100-ee8378bb527e/cp-metrics/0.log" Nov 25 10:54:21 crc kubenswrapper[4813]: I1125 10:54:21.142332 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z9zl6_851ec932-482a-43c0-a100-ee8378bb527e/cp-reloader/0.log" Nov 25 10:54:21 crc kubenswrapper[4813]: I1125 10:54:21.320905 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z9zl6_851ec932-482a-43c0-a100-ee8378bb527e/cp-reloader/0.log" Nov 25 10:54:21 crc kubenswrapper[4813]: I1125 10:54:21.321141 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z9zl6_851ec932-482a-43c0-a100-ee8378bb527e/cp-frr-files/0.log" Nov 25 10:54:21 crc kubenswrapper[4813]: I1125 10:54:21.335731 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z9zl6_851ec932-482a-43c0-a100-ee8378bb527e/cp-metrics/0.log" Nov 25 10:54:21 crc kubenswrapper[4813]: I1125 10:54:21.358159 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z9zl6_851ec932-482a-43c0-a100-ee8378bb527e/controller/0.log" Nov 25 10:54:21 crc 
kubenswrapper[4813]: I1125 10:54:21.485056 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z9zl6_851ec932-482a-43c0-a100-ee8378bb527e/frr-metrics/0.log" Nov 25 10:54:21 crc kubenswrapper[4813]: I1125 10:54:21.536919 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z9zl6_851ec932-482a-43c0-a100-ee8378bb527e/kube-rbac-proxy/0.log" Nov 25 10:54:21 crc kubenswrapper[4813]: I1125 10:54:21.587444 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z9zl6_851ec932-482a-43c0-a100-ee8378bb527e/kube-rbac-proxy-frr/0.log" Nov 25 10:54:21 crc kubenswrapper[4813]: I1125 10:54:21.762690 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6b84b955f5-mmrm7_a6eb0ffd-2e55-4d5a-9ac7-19b25ba6ec8b/manager/4.log" Nov 25 10:54:21 crc kubenswrapper[4813]: I1125 10:54:21.789903 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z9zl6_851ec932-482a-43c0-a100-ee8378bb527e/reloader/0.log" Nov 25 10:54:21 crc kubenswrapper[4813]: I1125 10:54:21.895832 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z9zl6_851ec932-482a-43c0-a100-ee8378bb527e/frr/0.log" Nov 25 10:54:21 crc kubenswrapper[4813]: I1125 10:54:21.996288 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6b84b955f5-mmrm7_a6eb0ffd-2e55-4d5a-9ac7-19b25ba6ec8b/manager/3.log" Nov 25 10:54:22 crc kubenswrapper[4813]: I1125 10:54:22.003651 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-546d569f67-5bbtt_1679876e-16fe-4437-a0d5-05f978057c2d/webhook-server/0.log" Nov 25 10:54:22 crc kubenswrapper[4813]: I1125 10:54:22.215877 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-gwnrv_724aef58-6386-4f8e-bfaf-231b5dfcea9b/kube-rbac-proxy/0.log" Nov 25 10:54:22 crc kubenswrapper[4813]: I1125 10:54:22.412526 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-gwnrv_724aef58-6386-4f8e-bfaf-231b5dfcea9b/speaker/0.log" Nov 25 10:54:26 crc kubenswrapper[4813]: I1125 10:54:26.870806 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="9005be17-9874-4f4f-bd91-39b3c74314ec" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:54:26 crc kubenswrapper[4813]: I1125 10:54:26.871041 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="9005be17-9874-4f4f-bd91-39b3c74314ec" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:54:28 crc kubenswrapper[4813]: I1125 10:54:28.870899 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="0444b7b3-af36-4fca-80c6-8348adc42a58" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:54:28 crc kubenswrapper[4813]: I1125 10:54:28.870918 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="0444b7b3-af36-4fca-80c6-8348adc42a58" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:54:36 crc kubenswrapper[4813]: I1125 10:54:36.869860 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="9005be17-9874-4f4f-bd91-39b3c74314ec" containerName="galera" 
probeResult="failure" output="command timed out" Nov 25 10:54:36 crc kubenswrapper[4813]: I1125 10:54:36.870203 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="9005be17-9874-4f4f-bd91-39b3c74314ec" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:54:38 crc kubenswrapper[4813]: I1125 10:54:38.868869 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="0444b7b3-af36-4fca-80c6-8348adc42a58" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:54:38 crc kubenswrapper[4813]: I1125 10:54:38.869884 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="0444b7b3-af36-4fca-80c6-8348adc42a58" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:54:46 crc kubenswrapper[4813]: I1125 10:54:46.870442 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="9005be17-9874-4f4f-bd91-39b3c74314ec" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:54:46 crc kubenswrapper[4813]: I1125 10:54:46.871840 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="9005be17-9874-4f4f-bd91-39b3c74314ec" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:54:46 crc kubenswrapper[4813]: I1125 10:54:46.871963 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/openstack-galera-0" Nov 25 10:54:46 crc kubenswrapper[4813]: I1125 10:54:46.873212 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"ddfa60bfa8fd381a916836614d060ee6460f4aefaf155d20b4cc9b7d222ecb37"} pod="openstack/openstack-galera-0" containerMessage="Container galera failed liveness probe, will be restarted" Nov 25 10:54:47 crc kubenswrapper[4813]: I1125 10:54:47.115360 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="9005be17-9874-4f4f-bd91-39b3c74314ec" containerName="galera" containerID="cri-o://ddfa60bfa8fd381a916836614d060ee6460f4aefaf155d20b4cc9b7d222ecb37" gracePeriod=30 Nov 25 10:54:47 crc kubenswrapper[4813]: I1125 10:54:47.503845 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="9005be17-9874-4f4f-bd91-39b3c74314ec" containerName="galera" probeResult="failure" output=< Nov 25 10:54:47 crc kubenswrapper[4813]: WARNING: password retrieved from cluster failed authentication Nov 25 10:54:47 crc kubenswrapper[4813]: > Nov 25 10:54:47 crc kubenswrapper[4813]: I1125 10:54:47.725108 4813 generic.go:334] "Generic (PLEG): container finished" podID="9005be17-9874-4f4f-bd91-39b3c74314ec" containerID="ddfa60bfa8fd381a916836614d060ee6460f4aefaf155d20b4cc9b7d222ecb37" exitCode=0 Nov 25 10:54:47 crc kubenswrapper[4813]: I1125 10:54:47.725186 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9005be17-9874-4f4f-bd91-39b3c74314ec","Type":"ContainerDied","Data":"ddfa60bfa8fd381a916836614d060ee6460f4aefaf155d20b4cc9b7d222ecb37"} Nov 25 10:54:47 crc kubenswrapper[4813]: I1125 10:54:47.725767 4813 scope.go:117] "RemoveContainer" containerID="4ddbb2e7aae14ce2438409cffc3d4b2cc015e0f5b04258c050e4393a6a044ddb" Nov 25 10:54:48 crc kubenswrapper[4813]: I1125 10:54:48.735629 4813 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/openstack-galera-0" event={"ID":"9005be17-9874-4f4f-bd91-39b3c74314ec","Type":"ContainerStarted","Data":"8fe3b5908bd882a43edcc879c4097ad5b7aa6d5ecfb8246545819a3dfe3577df"} Nov 25 10:54:48 crc kubenswrapper[4813]: I1125 10:54:48.870010 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="0444b7b3-af36-4fca-80c6-8348adc42a58" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:54:48 crc kubenswrapper[4813]: I1125 10:54:48.870628 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="0444b7b3-af36-4fca-80c6-8348adc42a58" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:54:48 crc kubenswrapper[4813]: I1125 10:54:48.870760 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Nov 25 10:54:48 crc kubenswrapper[4813]: I1125 10:54:48.885395 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"ba754884ad309d450a0aa3d538b1abaf3712fcbb78f29d0de1c6d9b4de5a090c"} pod="openstack/openstack-cell1-galera-0" containerMessage="Container galera failed liveness probe, will be restarted" Nov 25 10:54:49 crc kubenswrapper[4813]: I1125 10:54:49.074988 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="0444b7b3-af36-4fca-80c6-8348adc42a58" containerName="galera" containerID="cri-o://ba754884ad309d450a0aa3d538b1abaf3712fcbb78f29d0de1c6d9b4de5a090c" gracePeriod=30 Nov 25 10:54:49 crc kubenswrapper[4813]: I1125 10:54:49.425321 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="0444b7b3-af36-4fca-80c6-8348adc42a58" containerName="galera" probeResult="failure" output=< Nov 25 10:54:49 crc kubenswrapper[4813]: WARNING: password retrieved from cluster failed authentication Nov 25 10:54:49 crc kubenswrapper[4813]: > Nov 25 10:54:49 crc kubenswrapper[4813]: I1125 10:54:49.757830 4813 generic.go:334] "Generic (PLEG): container finished" podID="0444b7b3-af36-4fca-80c6-8348adc42a58" containerID="ba754884ad309d450a0aa3d538b1abaf3712fcbb78f29d0de1c6d9b4de5a090c" exitCode=0 Nov 25 10:54:49 crc kubenswrapper[4813]: I1125 10:54:49.757908 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"0444b7b3-af36-4fca-80c6-8348adc42a58","Type":"ContainerDied","Data":"ba754884ad309d450a0aa3d538b1abaf3712fcbb78f29d0de1c6d9b4de5a090c"} Nov 25 10:54:49 crc kubenswrapper[4813]: I1125 10:54:49.758290 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"0444b7b3-af36-4fca-80c6-8348adc42a58","Type":"ContainerStarted","Data":"80908b657a821f229fbc82bb8d2cb2e101a613cef69d3be9f8711253527ccbd9"} Nov 25 10:54:49 crc kubenswrapper[4813]: I1125 10:54:49.758311 4813 scope.go:117] "RemoveContainer" containerID="a0282e9d6adda4400290b979af7557f52adf84f954307c5aa31c09b51514f6b1" Nov 25 10:54:50 crc kubenswrapper[4813]: E1125 10:54:50.146215 4813 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.129.56.91:52694->38.129.56.91:42617: write tcp 192.168.126.11:10250->192.168.126.11:44644: write: connection reset by peer Nov 25 10:54:50 crc kubenswrapper[4813]: E1125 10:54:50.194974 4813 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 
38.129.56.91:52706->38.129.56.91:42617: write tcp 38.129.56.91:52706->38.129.56.91:42617: write: broken pipe Nov 25 10:54:56 crc kubenswrapper[4813]: I1125 10:54:56.018119 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Nov 25 10:54:56 crc kubenswrapper[4813]: I1125 10:54:56.019828 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Nov 25 10:54:56 crc kubenswrapper[4813]: I1125 10:54:56.195238 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Nov 25 10:54:57 crc kubenswrapper[4813]: I1125 10:54:57.465741 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Nov 25 10:54:57 crc kubenswrapper[4813]: I1125 10:54:57.466096 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Nov 25 10:54:57 crc kubenswrapper[4813]: I1125 10:54:57.643622 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Nov 25 10:54:57 crc kubenswrapper[4813]: I1125 10:54:57.870263 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="9005be17-9874-4f4f-bd91-39b3c74314ec" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:54:58 crc kubenswrapper[4813]: I1125 10:54:58.869834 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="9005be17-9874-4f4f-bd91-39b3c74314ec" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:54:58 crc kubenswrapper[4813]: I1125 10:54:58.870236 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="0444b7b3-af36-4fca-80c6-8348adc42a58" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:55:06 crc kubenswrapper[4813]: I1125 10:55:06.870071 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="9005be17-9874-4f4f-bd91-39b3c74314ec" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:55:06 crc kubenswrapper[4813]: I1125 10:55:06.870892 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="9005be17-9874-4f4f-bd91-39b3c74314ec" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:55:08 crc kubenswrapper[4813]: I1125 10:55:08.870328 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="0444b7b3-af36-4fca-80c6-8348adc42a58" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:55:08 crc kubenswrapper[4813]: I1125 10:55:08.870413 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="0444b7b3-af36-4fca-80c6-8348adc42a58" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:55:16 crc kubenswrapper[4813]: I1125 10:55:16.870261 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="9005be17-9874-4f4f-bd91-39b3c74314ec" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:55:16 crc kubenswrapper[4813]: I1125 10:55:16.870261 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="9005be17-9874-4f4f-bd91-39b3c74314ec" 
containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:55:18 crc kubenswrapper[4813]: I1125 10:55:18.868933 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="0444b7b3-af36-4fca-80c6-8348adc42a58" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:55:18 crc kubenswrapper[4813]: I1125 10:55:18.871410 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="0444b7b3-af36-4fca-80c6-8348adc42a58" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:55:24 crc kubenswrapper[4813]: I1125 10:55:24.118944 4813 generic.go:334] "Generic (PLEG): container finished" podID="90d80d33-b519-4d67-97ba-1b8b828e917b" containerID="798caaa475c1034e2ef39591a630f1bb0b528da32dd1cf7acc1726a570970c5c" exitCode=0 Nov 25 10:55:24 crc kubenswrapper[4813]: I1125 10:55:24.119062 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cdcjj/must-gather-5vk8l" event={"ID":"90d80d33-b519-4d67-97ba-1b8b828e917b","Type":"ContainerDied","Data":"798caaa475c1034e2ef39591a630f1bb0b528da32dd1cf7acc1726a570970c5c"} Nov 25 10:55:24 crc kubenswrapper[4813]: I1125 10:55:24.119809 4813 scope.go:117] "RemoveContainer" containerID="798caaa475c1034e2ef39591a630f1bb0b528da32dd1cf7acc1726a570970c5c" Nov 25 10:55:24 crc kubenswrapper[4813]: I1125 10:55:24.569719 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-cdcjj_must-gather-5vk8l_90d80d33-b519-4d67-97ba-1b8b828e917b/gather/0.log" Nov 25 10:55:26 crc kubenswrapper[4813]: I1125 10:55:26.869753 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="9005be17-9874-4f4f-bd91-39b3c74314ec" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:55:26 crc kubenswrapper[4813]: I1125 10:55:26.870602 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="9005be17-9874-4f4f-bd91-39b3c74314ec" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:55:26 crc kubenswrapper[4813]: I1125 10:55:26.870774 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/openstack-galera-0" Nov 25 10:55:26 crc kubenswrapper[4813]: I1125 10:55:26.872004 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"8fe3b5908bd882a43edcc879c4097ad5b7aa6d5ecfb8246545819a3dfe3577df"} pod="openstack/openstack-galera-0" containerMessage="Container galera failed liveness probe, will be restarted" Nov 25 10:55:27 crc kubenswrapper[4813]: I1125 10:55:27.071076 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="9005be17-9874-4f4f-bd91-39b3c74314ec" containerName="galera" containerID="cri-o://8fe3b5908bd882a43edcc879c4097ad5b7aa6d5ecfb8246545819a3dfe3577df" gracePeriod=30 Nov 25 10:55:27 crc kubenswrapper[4813]: I1125 10:55:27.713008 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="9005be17-9874-4f4f-bd91-39b3c74314ec" containerName="galera" probeResult="failure" output=< Nov 25 10:55:27 crc kubenswrapper[4813]: WARNING: password retrieved from cluster failed authentication Nov 25 10:55:27 crc kubenswrapper[4813]: > Nov 25 10:55:28 crc kubenswrapper[4813]: I1125 10:55:28.152426 4813 generic.go:334] "Generic (PLEG): container 
finished" podID="9005be17-9874-4f4f-bd91-39b3c74314ec" containerID="8fe3b5908bd882a43edcc879c4097ad5b7aa6d5ecfb8246545819a3dfe3577df" exitCode=0 Nov 25 10:55:28 crc kubenswrapper[4813]: I1125 10:55:28.152521 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9005be17-9874-4f4f-bd91-39b3c74314ec","Type":"ContainerDied","Data":"8fe3b5908bd882a43edcc879c4097ad5b7aa6d5ecfb8246545819a3dfe3577df"} Nov 25 10:55:28 crc kubenswrapper[4813]: I1125 10:55:28.153479 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9005be17-9874-4f4f-bd91-39b3c74314ec","Type":"ContainerStarted","Data":"1efb4c9781f1f4234bf85bc45997f3798dfacde25a7cc77afe7b93185efcee63"} Nov 25 10:55:28 crc kubenswrapper[4813]: I1125 10:55:28.153534 4813 scope.go:117] "RemoveContainer" containerID="ddfa60bfa8fd381a916836614d060ee6460f4aefaf155d20b4cc9b7d222ecb37" Nov 25 10:55:28 crc kubenswrapper[4813]: I1125 10:55:28.870391 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="0444b7b3-af36-4fca-80c6-8348adc42a58" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:55:28 crc kubenswrapper[4813]: I1125 10:55:28.870805 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="0444b7b3-af36-4fca-80c6-8348adc42a58" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:55:28 crc kubenswrapper[4813]: I1125 10:55:28.870848 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Nov 25 10:55:28 crc kubenswrapper[4813]: I1125 10:55:28.871508 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"80908b657a821f229fbc82bb8d2cb2e101a613cef69d3be9f8711253527ccbd9"} pod="openstack/openstack-cell1-galera-0" containerMessage="Container galera failed liveness probe, will be restarted" Nov 25 10:55:29 crc kubenswrapper[4813]: I1125 10:55:29.087223 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="0444b7b3-af36-4fca-80c6-8348adc42a58" containerName="galera" containerID="cri-o://80908b657a821f229fbc82bb8d2cb2e101a613cef69d3be9f8711253527ccbd9" gracePeriod=30 Nov 25 10:55:29 crc kubenswrapper[4813]: E1125 10:55:29.725325 4813 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.129.56.91:36990->38.129.56.91:42617: read tcp 38.129.56.91:36990->38.129.56.91:42617: read: connection reset by peer Nov 25 10:55:29 crc kubenswrapper[4813]: I1125 10:55:29.868773 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="0444b7b3-af36-4fca-80c6-8348adc42a58" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:55:30 crc kubenswrapper[4813]: I1125 10:55:30.184075 4813 generic.go:334] "Generic (PLEG): container finished" podID="0444b7b3-af36-4fca-80c6-8348adc42a58" containerID="80908b657a821f229fbc82bb8d2cb2e101a613cef69d3be9f8711253527ccbd9" exitCode=0 Nov 25 10:55:30 crc kubenswrapper[4813]: I1125 10:55:30.184171 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"0444b7b3-af36-4fca-80c6-8348adc42a58","Type":"ContainerDied","Data":"80908b657a821f229fbc82bb8d2cb2e101a613cef69d3be9f8711253527ccbd9"} Nov 25 10:55:30 crc kubenswrapper[4813]: I1125 
10:55:30.185243 4813 scope.go:117] "RemoveContainer" containerID="ba754884ad309d450a0aa3d538b1abaf3712fcbb78f29d0de1c6d9b4de5a090c" Nov 25 10:55:30 crc kubenswrapper[4813]: I1125 10:55:30.959115 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-cdcjj/must-gather-5vk8l"] Nov 25 10:55:30 crc kubenswrapper[4813]: I1125 10:55:30.959803 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-cdcjj/must-gather-5vk8l" podUID="90d80d33-b519-4d67-97ba-1b8b828e917b" containerName="copy" containerID="cri-o://ba5c52abb0a377bea345a1da6bef0137070c0a6dfe996550bad543e2b5469636" gracePeriod=2 Nov 25 10:55:30 crc kubenswrapper[4813]: I1125 10:55:30.965919 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-cdcjj/must-gather-5vk8l"] Nov 25 10:55:31 crc kubenswrapper[4813]: I1125 10:55:31.195648 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-cdcjj_must-gather-5vk8l_90d80d33-b519-4d67-97ba-1b8b828e917b/copy/0.log" Nov 25 10:55:31 crc kubenswrapper[4813]: I1125 10:55:31.196094 4813 generic.go:334] "Generic (PLEG): container finished" podID="90d80d33-b519-4d67-97ba-1b8b828e917b" containerID="ba5c52abb0a377bea345a1da6bef0137070c0a6dfe996550bad543e2b5469636" exitCode=143 Nov 25 10:55:31 crc kubenswrapper[4813]: I1125 10:55:31.199476 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"0444b7b3-af36-4fca-80c6-8348adc42a58","Type":"ContainerStarted","Data":"0ce6e650b0e6f5717f39355ed96f5b66a51107fdd6cec7f90294b10781b58357"} Nov 25 10:55:31 crc kubenswrapper[4813]: I1125 10:55:31.502390 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-cdcjj_must-gather-5vk8l_90d80d33-b519-4d67-97ba-1b8b828e917b/copy/0.log" Nov 25 10:55:31 crc kubenswrapper[4813]: I1125 10:55:31.503507 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cdcjj/must-gather-5vk8l" Nov 25 10:55:31 crc kubenswrapper[4813]: I1125 10:55:31.594099 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/90d80d33-b519-4d67-97ba-1b8b828e917b-must-gather-output\") pod \"90d80d33-b519-4d67-97ba-1b8b828e917b\" (UID: \"90d80d33-b519-4d67-97ba-1b8b828e917b\") " Nov 25 10:55:31 crc kubenswrapper[4813]: I1125 10:55:31.594182 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvtsf\" (UniqueName: \"kubernetes.io/projected/90d80d33-b519-4d67-97ba-1b8b828e917b-kube-api-access-xvtsf\") pod \"90d80d33-b519-4d67-97ba-1b8b828e917b\" (UID: \"90d80d33-b519-4d67-97ba-1b8b828e917b\") " Nov 25 10:55:31 crc kubenswrapper[4813]: I1125 10:55:31.600555 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90d80d33-b519-4d67-97ba-1b8b828e917b-kube-api-access-xvtsf" (OuterVolumeSpecName: "kube-api-access-xvtsf") pod "90d80d33-b519-4d67-97ba-1b8b828e917b" (UID: "90d80d33-b519-4d67-97ba-1b8b828e917b"). InnerVolumeSpecName "kube-api-access-xvtsf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:55:31 crc kubenswrapper[4813]: I1125 10:55:31.689326 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90d80d33-b519-4d67-97ba-1b8b828e917b-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "90d80d33-b519-4d67-97ba-1b8b828e917b" (UID: "90d80d33-b519-4d67-97ba-1b8b828e917b"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 10:55:31 crc kubenswrapper[4813]: I1125 10:55:31.696195 4813 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/90d80d33-b519-4d67-97ba-1b8b828e917b-must-gather-output\") on node \"crc\" DevicePath \"\"" Nov 25 10:55:31 crc kubenswrapper[4813]: I1125 10:55:31.696226 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvtsf\" (UniqueName: \"kubernetes.io/projected/90d80d33-b519-4d67-97ba-1b8b828e917b-kube-api-access-xvtsf\") on node \"crc\" DevicePath \"\"" Nov 25 10:55:32 crc kubenswrapper[4813]: I1125 10:55:32.208375 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-cdcjj_must-gather-5vk8l_90d80d33-b519-4d67-97ba-1b8b828e917b/copy/0.log" Nov 25 10:55:32 crc kubenswrapper[4813]: I1125 10:55:32.209581 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cdcjj/must-gather-5vk8l" Nov 25 10:55:32 crc kubenswrapper[4813]: I1125 10:55:32.210977 4813 scope.go:117] "RemoveContainer" containerID="ba5c52abb0a377bea345a1da6bef0137070c0a6dfe996550bad543e2b5469636" Nov 25 10:55:32 crc kubenswrapper[4813]: I1125 10:55:32.236304 4813 scope.go:117] "RemoveContainer" containerID="798caaa475c1034e2ef39591a630f1bb0b528da32dd1cf7acc1726a570970c5c" Nov 25 10:55:33 crc kubenswrapper[4813]: I1125 10:55:33.638797 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90d80d33-b519-4d67-97ba-1b8b828e917b" path="/var/lib/kubelet/pods/90d80d33-b519-4d67-97ba-1b8b828e917b/volumes" Nov 25 10:55:36 crc kubenswrapper[4813]: I1125 10:55:36.017377 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Nov 25 10:55:36 crc kubenswrapper[4813]: I1125 10:55:36.018506 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Nov 25 10:55:36 crc kubenswrapper[4813]: I1125 10:55:36.187519 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Nov 25 10:55:37 crc kubenswrapper[4813]: I1125 10:55:37.466753 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Nov 25 10:55:37 crc kubenswrapper[4813]: I1125 10:55:37.467134 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Nov 25 10:55:37 crc kubenswrapper[4813]: I1125 10:55:37.639398 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Nov 25 10:55:37 crc kubenswrapper[4813]: I1125 10:55:37.870879 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="9005be17-9874-4f4f-bd91-39b3c74314ec" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:55:39 crc kubenswrapper[4813]: I1125 10:55:39.871456 4813 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/openstack-cell1-galera-0" podUID="0444b7b3-af36-4fca-80c6-8348adc42a58" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:55:46 crc kubenswrapper[4813]: I1125 10:55:46.869880 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="9005be17-9874-4f4f-bd91-39b3c74314ec" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:55:46 crc kubenswrapper[4813]: I1125 10:55:46.869880 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="9005be17-9874-4f4f-bd91-39b3c74314ec" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:55:48 crc kubenswrapper[4813]: I1125 10:55:48.870024 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="0444b7b3-af36-4fca-80c6-8348adc42a58" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:55:48 crc kubenswrapper[4813]: I1125 10:55:48.870390 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="0444b7b3-af36-4fca-80c6-8348adc42a58" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:55:51 crc kubenswrapper[4813]: I1125 10:55:51.967042 4813 patch_prober.go:28] interesting pod/machine-config-daemon-knhz8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 10:55:51 crc kubenswrapper[4813]: I1125 10:55:51.967369 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" podUID="8ece7e9c-d49a-4348-98ec-bd6ab589f750" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 10:55:56 crc kubenswrapper[4813]: I1125 10:55:56.869856 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="9005be17-9874-4f4f-bd91-39b3c74314ec" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:55:56 crc kubenswrapper[4813]: I1125 10:55:56.869917 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="9005be17-9874-4f4f-bd91-39b3c74314ec" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:55:58 crc kubenswrapper[4813]: I1125 10:55:58.870843 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="0444b7b3-af36-4fca-80c6-8348adc42a58" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:55:58 crc kubenswrapper[4813]: I1125 10:55:58.871007 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="0444b7b3-af36-4fca-80c6-8348adc42a58" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:56:06 crc kubenswrapper[4813]: I1125 10:56:06.870074 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="9005be17-9874-4f4f-bd91-39b3c74314ec" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:56:06 crc kubenswrapper[4813]: I1125 10:56:06.870298 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" 
podUID="9005be17-9874-4f4f-bd91-39b3c74314ec" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:56:06 crc kubenswrapper[4813]: I1125 10:56:06.870962 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/openstack-galera-0" Nov 25 10:56:06 crc kubenswrapper[4813]: I1125 10:56:06.871738 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"1efb4c9781f1f4234bf85bc45997f3798dfacde25a7cc77afe7b93185efcee63"} pod="openstack/openstack-galera-0" containerMessage="Container galera failed liveness probe, will be restarted" Nov 25 10:56:07 crc kubenswrapper[4813]: I1125 10:56:07.058964 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="9005be17-9874-4f4f-bd91-39b3c74314ec" containerName="galera" containerID="cri-o://1efb4c9781f1f4234bf85bc45997f3798dfacde25a7cc77afe7b93185efcee63" gracePeriod=30 Nov 25 10:56:07 crc kubenswrapper[4813]: I1125 10:56:07.375428 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="9005be17-9874-4f4f-bd91-39b3c74314ec" containerName="galera" probeResult="failure" output=< Nov 25 10:56:07 crc kubenswrapper[4813]: WARNING: password retrieved from cluster failed authentication Nov 25 10:56:07 crc kubenswrapper[4813]: > Nov 25 10:56:07 crc kubenswrapper[4813]: E1125 10:56:07.474915 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"galera\" with CrashLoopBackOff: \"back-off 40s restarting failed container=galera pod=openstack-galera-0_openstack(9005be17-9874-4f4f-bd91-39b3c74314ec)\"" pod="openstack/openstack-galera-0" podUID="9005be17-9874-4f4f-bd91-39b3c74314ec" Nov 25 10:56:07 crc kubenswrapper[4813]: I1125 10:56:07.505128 4813 generic.go:334] "Generic (PLEG): container finished" podID="9005be17-9874-4f4f-bd91-39b3c74314ec" containerID="1efb4c9781f1f4234bf85bc45997f3798dfacde25a7cc77afe7b93185efcee63" exitCode=0 Nov 25 10:56:07 crc kubenswrapper[4813]: I1125 10:56:07.505180 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9005be17-9874-4f4f-bd91-39b3c74314ec","Type":"ContainerDied","Data":"1efb4c9781f1f4234bf85bc45997f3798dfacde25a7cc77afe7b93185efcee63"} Nov 25 10:56:07 crc kubenswrapper[4813]: I1125 10:56:07.505464 4813 scope.go:117] "RemoveContainer" containerID="8fe3b5908bd882a43edcc879c4097ad5b7aa6d5ecfb8246545819a3dfe3577df" Nov 25 10:56:07 crc kubenswrapper[4813]: I1125 10:56:07.506117 4813 scope.go:117] "RemoveContainer" containerID="1efb4c9781f1f4234bf85bc45997f3798dfacde25a7cc77afe7b93185efcee63" Nov 25 10:56:07 crc kubenswrapper[4813]: E1125 10:56:07.506383 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"galera\" with CrashLoopBackOff: \"back-off 40s restarting failed container=galera pod=openstack-galera-0_openstack(9005be17-9874-4f4f-bd91-39b3c74314ec)\"" pod="openstack/openstack-galera-0" podUID="9005be17-9874-4f4f-bd91-39b3c74314ec" Nov 25 10:56:08 crc kubenswrapper[4813]: I1125 10:56:08.869814 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="0444b7b3-af36-4fca-80c6-8348adc42a58" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:56:08 crc kubenswrapper[4813]: I1125 10:56:08.869884 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openstack/openstack-cell1-galera-0" Nov 25 10:56:08 crc kubenswrapper[4813]: I1125 10:56:08.869957 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="0444b7b3-af36-4fca-80c6-8348adc42a58" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:56:08 crc kubenswrapper[4813]: I1125 10:56:08.870586 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"0ce6e650b0e6f5717f39355ed96f5b66a51107fdd6cec7f90294b10781b58357"} pod="openstack/openstack-cell1-galera-0" containerMessage="Container galera failed liveness probe, will be restarted" Nov 25 10:56:09 crc kubenswrapper[4813]: I1125 10:56:09.066911 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="0444b7b3-af36-4fca-80c6-8348adc42a58" containerName="galera" containerID="cri-o://0ce6e650b0e6f5717f39355ed96f5b66a51107fdd6cec7f90294b10781b58357" gracePeriod=30 Nov 25 10:56:09 crc kubenswrapper[4813]: I1125 10:56:09.492836 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="0444b7b3-af36-4fca-80c6-8348adc42a58" containerName="galera" probeResult="failure" output=< Nov 25 10:56:09 crc kubenswrapper[4813]: WARNING: password retrieved from cluster failed authentication Nov 25 10:56:09 crc kubenswrapper[4813]: > Nov 25 10:56:09 crc kubenswrapper[4813]: E1125 10:56:09.596343 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"galera\" with CrashLoopBackOff: \"back-off 40s restarting failed container=galera pod=openstack-cell1-galera-0_openstack(0444b7b3-af36-4fca-80c6-8348adc42a58)\"" pod="openstack/openstack-cell1-galera-0" podUID="0444b7b3-af36-4fca-80c6-8348adc42a58" Nov 25 10:56:10 crc kubenswrapper[4813]: I1125 10:56:10.536033 4813 generic.go:334] "Generic (PLEG): container finished" podID="0444b7b3-af36-4fca-80c6-8348adc42a58" containerID="0ce6e650b0e6f5717f39355ed96f5b66a51107fdd6cec7f90294b10781b58357" exitCode=0 Nov 25 10:56:10 crc kubenswrapper[4813]: I1125 10:56:10.536100 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"0444b7b3-af36-4fca-80c6-8348adc42a58","Type":"ContainerDied","Data":"0ce6e650b0e6f5717f39355ed96f5b66a51107fdd6cec7f90294b10781b58357"} Nov 25 10:56:10 crc kubenswrapper[4813]: I1125 10:56:10.537141 4813 scope.go:117] "RemoveContainer" containerID="80908b657a821f229fbc82bb8d2cb2e101a613cef69d3be9f8711253527ccbd9" Nov 25 10:56:10 crc kubenswrapper[4813]: I1125 10:56:10.537629 4813 scope.go:117] "RemoveContainer" containerID="0ce6e650b0e6f5717f39355ed96f5b66a51107fdd6cec7f90294b10781b58357" Nov 25 10:56:10 crc kubenswrapper[4813]: E1125 10:56:10.537915 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"galera\" with CrashLoopBackOff: \"back-off 40s restarting failed container=galera pod=openstack-cell1-galera-0_openstack(0444b7b3-af36-4fca-80c6-8348adc42a58)\"" pod="openstack/openstack-cell1-galera-0" podUID="0444b7b3-af36-4fca-80c6-8348adc42a58" Nov 25 10:56:16 crc kubenswrapper[4813]: I1125 10:56:16.017342 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Nov 25 10:56:16 crc kubenswrapper[4813]: I1125 10:56:16.018712 4813 scope.go:117] "RemoveContainer" containerID="1efb4c9781f1f4234bf85bc45997f3798dfacde25a7cc77afe7b93185efcee63" Nov 25 
10:56:16 crc kubenswrapper[4813]: E1125 10:56:16.018918 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"galera\" with CrashLoopBackOff: \"back-off 40s restarting failed container=galera pod=openstack-galera-0_openstack(9005be17-9874-4f4f-bd91-39b3c74314ec)\"" pod="openstack/openstack-galera-0" podUID="9005be17-9874-4f4f-bd91-39b3c74314ec" Nov 25 10:56:17 crc kubenswrapper[4813]: I1125 10:56:17.465936 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Nov 25 10:56:17 crc kubenswrapper[4813]: I1125 10:56:17.467130 4813 scope.go:117] "RemoveContainer" containerID="0ce6e650b0e6f5717f39355ed96f5b66a51107fdd6cec7f90294b10781b58357" Nov 25 10:56:17 crc kubenswrapper[4813]: E1125 10:56:17.467404 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"galera\" with CrashLoopBackOff: \"back-off 40s restarting failed container=galera pod=openstack-cell1-galera-0_openstack(0444b7b3-af36-4fca-80c6-8348adc42a58)\"" pod="openstack/openstack-cell1-galera-0" podUID="0444b7b3-af36-4fca-80c6-8348adc42a58" Nov 25 10:56:21 crc kubenswrapper[4813]: I1125 10:56:21.966982 4813 patch_prober.go:28] interesting pod/machine-config-daemon-knhz8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 10:56:21 crc kubenswrapper[4813]: I1125 10:56:21.967392 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" podUID="8ece7e9c-d49a-4348-98ec-bd6ab589f750" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 10:56:26 crc kubenswrapper[4813]: I1125 10:56:26.622569 4813 scope.go:117] "RemoveContainer" containerID="1efb4c9781f1f4234bf85bc45997f3798dfacde25a7cc77afe7b93185efcee63" Nov 25 10:56:26 crc kubenswrapper[4813]: E1125 10:56:26.624293 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"galera\" with CrashLoopBackOff: \"back-off 40s restarting failed container=galera pod=openstack-galera-0_openstack(9005be17-9874-4f4f-bd91-39b3c74314ec)\"" pod="openstack/openstack-galera-0" podUID="9005be17-9874-4f4f-bd91-39b3c74314ec" Nov 25 10:56:30 crc kubenswrapper[4813]: I1125 10:56:30.621592 4813 scope.go:117] "RemoveContainer" containerID="0ce6e650b0e6f5717f39355ed96f5b66a51107fdd6cec7f90294b10781b58357" Nov 25 10:56:30 crc kubenswrapper[4813]: E1125 10:56:30.623013 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"galera\" with CrashLoopBackOff: \"back-off 40s restarting failed container=galera pod=openstack-cell1-galera-0_openstack(0444b7b3-af36-4fca-80c6-8348adc42a58)\"" pod="openstack/openstack-cell1-galera-0" podUID="0444b7b3-af36-4fca-80c6-8348adc42a58" Nov 25 10:56:40 crc kubenswrapper[4813]: I1125 10:56:40.622408 4813 scope.go:117] "RemoveContainer" containerID="1efb4c9781f1f4234bf85bc45997f3798dfacde25a7cc77afe7b93185efcee63" Nov 25 10:56:40 crc kubenswrapper[4813]: E1125 10:56:40.623324 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"galera\" with CrashLoopBackOff: \"back-off 40s restarting failed container=galera 
pod=openstack-galera-0_openstack(9005be17-9874-4f4f-bd91-39b3c74314ec)\"" pod="openstack/openstack-galera-0" podUID="9005be17-9874-4f4f-bd91-39b3c74314ec" Nov 25 10:56:41 crc kubenswrapper[4813]: I1125 10:56:41.621779 4813 scope.go:117] "RemoveContainer" containerID="0ce6e650b0e6f5717f39355ed96f5b66a51107fdd6cec7f90294b10781b58357" Nov 25 10:56:41 crc kubenswrapper[4813]: E1125 10:56:41.622088 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"galera\" with CrashLoopBackOff: \"back-off 40s restarting failed container=galera pod=openstack-cell1-galera-0_openstack(0444b7b3-af36-4fca-80c6-8348adc42a58)\"" pod="openstack/openstack-cell1-galera-0" podUID="0444b7b3-af36-4fca-80c6-8348adc42a58" Nov 25 10:56:51 crc kubenswrapper[4813]: I1125 10:56:51.622175 4813 scope.go:117] "RemoveContainer" containerID="1efb4c9781f1f4234bf85bc45997f3798dfacde25a7cc77afe7b93185efcee63" Nov 25 10:56:51 crc kubenswrapper[4813]: I1125 10:56:51.878969 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9005be17-9874-4f4f-bd91-39b3c74314ec","Type":"ContainerStarted","Data":"557a488d6cc0c793d961f1f7cd163956509a3712f60e929a91c2a6b7c7534930"} Nov 25 10:56:51 crc kubenswrapper[4813]: I1125 10:56:51.967365 4813 patch_prober.go:28] interesting pod/machine-config-daemon-knhz8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 10:56:51 crc kubenswrapper[4813]: I1125 10:56:51.967425 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" podUID="8ece7e9c-d49a-4348-98ec-bd6ab589f750" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 10:56:51 crc kubenswrapper[4813]: I1125 10:56:51.967464 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" Nov 25 10:56:51 crc kubenswrapper[4813]: I1125 10:56:51.968105 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d35cff04017f1dbbd2dc074e7ec64c0bf7970af72edd947e4aaa4314840c882a"} pod="openshift-machine-config-operator/machine-config-daemon-knhz8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 10:56:51 crc kubenswrapper[4813]: I1125 10:56:51.968167 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" podUID="8ece7e9c-d49a-4348-98ec-bd6ab589f750" containerName="machine-config-daemon" containerID="cri-o://d35cff04017f1dbbd2dc074e7ec64c0bf7970af72edd947e4aaa4314840c882a" gracePeriod=600 Nov 25 10:56:52 crc kubenswrapper[4813]: I1125 10:56:52.621739 4813 scope.go:117] "RemoveContainer" containerID="0ce6e650b0e6f5717f39355ed96f5b66a51107fdd6cec7f90294b10781b58357" Nov 25 10:56:52 crc kubenswrapper[4813]: I1125 10:56:52.888859 4813 generic.go:334] "Generic (PLEG): container finished" podID="8ece7e9c-d49a-4348-98ec-bd6ab589f750" containerID="d35cff04017f1dbbd2dc074e7ec64c0bf7970af72edd947e4aaa4314840c882a" exitCode=0 Nov 25 10:56:52 crc kubenswrapper[4813]: I1125 10:56:52.888912 4813 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" event={"ID":"8ece7e9c-d49a-4348-98ec-bd6ab589f750","Type":"ContainerDied","Data":"d35cff04017f1dbbd2dc074e7ec64c0bf7970af72edd947e4aaa4314840c882a"} Nov 25 10:56:52 crc kubenswrapper[4813]: I1125 10:56:52.888942 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-knhz8" event={"ID":"8ece7e9c-d49a-4348-98ec-bd6ab589f750","Type":"ContainerStarted","Data":"0e86184271cd637ee5cbf4948966949e41bf9451713b69f14017a065ff5c5fd1"} Nov 25 10:56:52 crc kubenswrapper[4813]: I1125 10:56:52.888956 4813 scope.go:117] "RemoveContainer" containerID="aa994bf2afc77b306a9a9dd90fad6893b4b3c7e60546773c6c8bfb41dfb47486" Nov 25 10:56:53 crc kubenswrapper[4813]: I1125 10:56:53.900093 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"0444b7b3-af36-4fca-80c6-8348adc42a58","Type":"ContainerStarted","Data":"287d0a488db3e43083bbc400093ccf672e4338738a7bcf5a4c9094d9867b55b0"} Nov 25 10:56:56 crc kubenswrapper[4813]: I1125 10:56:56.017636 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Nov 25 10:56:56 crc kubenswrapper[4813]: I1125 10:56:56.018365 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Nov 25 10:56:56 crc kubenswrapper[4813]: I1125 10:56:56.614325 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Nov 25 10:56:57 crc kubenswrapper[4813]: I1125 10:56:57.466907 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Nov 25 10:56:57 crc kubenswrapper[4813]: I1125 10:56:57.467311 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Nov 25 10:56:57 crc kubenswrapper[4813]: I1125 10:56:57.618402 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Nov 25 10:56:57 crc kubenswrapper[4813]: I1125 10:56:57.870267 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="9005be17-9874-4f4f-bd91-39b3c74314ec" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:56:58 crc kubenswrapper[4813]: I1125 10:56:58.880456 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="0444b7b3-af36-4fca-80c6-8348adc42a58" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:57:06 crc kubenswrapper[4813]: I1125 10:57:06.869501 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="9005be17-9874-4f4f-bd91-39b3c74314ec" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:57:06 crc kubenswrapper[4813]: I1125 10:57:06.870253 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="9005be17-9874-4f4f-bd91-39b3c74314ec" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:57:08 crc kubenswrapper[4813]: I1125 10:57:08.871354 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="0444b7b3-af36-4fca-80c6-8348adc42a58" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:57:08 crc kubenswrapper[4813]: I1125 10:57:08.872233 4813 prober.go:107] 
"Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="0444b7b3-af36-4fca-80c6-8348adc42a58" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:57:16 crc kubenswrapper[4813]: I1125 10:57:16.869853 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="9005be17-9874-4f4f-bd91-39b3c74314ec" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:57:16 crc kubenswrapper[4813]: I1125 10:57:16.869878 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="9005be17-9874-4f4f-bd91-39b3c74314ec" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:57:18 crc kubenswrapper[4813]: I1125 10:57:18.870463 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="0444b7b3-af36-4fca-80c6-8348adc42a58" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:57:18 crc kubenswrapper[4813]: I1125 10:57:18.870489 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="0444b7b3-af36-4fca-80c6-8348adc42a58" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:57:26 crc kubenswrapper[4813]: I1125 10:57:26.870097 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="9005be17-9874-4f4f-bd91-39b3c74314ec" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:57:26 crc kubenswrapper[4813]: I1125 10:57:26.870178 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="9005be17-9874-4f4f-bd91-39b3c74314ec" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:57:26 crc kubenswrapper[4813]: I1125 10:57:26.872522 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/openstack-galera-0" Nov 25 10:57:26 crc kubenswrapper[4813]: I1125 10:57:26.873868 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"557a488d6cc0c793d961f1f7cd163956509a3712f60e929a91c2a6b7c7534930"} pod="openstack/openstack-galera-0" containerMessage="Container galera failed liveness probe, will be restarted" Nov 25 10:57:27 crc kubenswrapper[4813]: I1125 10:57:27.065262 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="9005be17-9874-4f4f-bd91-39b3c74314ec" containerName="galera" containerID="cri-o://557a488d6cc0c793d961f1f7cd163956509a3712f60e929a91c2a6b7c7534930" gracePeriod=30 Nov 25 10:57:27 crc kubenswrapper[4813]: I1125 10:57:27.396809 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="9005be17-9874-4f4f-bd91-39b3c74314ec" containerName="galera" probeResult="failure" output=< Nov 25 10:57:27 crc kubenswrapper[4813]: WARNING: password retrieved from cluster failed authentication Nov 25 10:57:27 crc kubenswrapper[4813]: > Nov 25 10:57:28 crc kubenswrapper[4813]: I1125 10:57:28.195147 4813 generic.go:334] "Generic (PLEG): container finished" podID="9005be17-9874-4f4f-bd91-39b3c74314ec" containerID="557a488d6cc0c793d961f1f7cd163956509a3712f60e929a91c2a6b7c7534930" exitCode=0 Nov 25 10:57:28 crc kubenswrapper[4813]: I1125 10:57:28.195524 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" 
event={"ID":"9005be17-9874-4f4f-bd91-39b3c74314ec","Type":"ContainerDied","Data":"557a488d6cc0c793d961f1f7cd163956509a3712f60e929a91c2a6b7c7534930"} Nov 25 10:57:28 crc kubenswrapper[4813]: I1125 10:57:28.195566 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9005be17-9874-4f4f-bd91-39b3c74314ec","Type":"ContainerStarted","Data":"1d22e214c9ab699b58821fbdc46c816abe0d54f7f58e7aa46dd1e2dab8173866"} Nov 25 10:57:28 crc kubenswrapper[4813]: I1125 10:57:28.195595 4813 scope.go:117] "RemoveContainer" containerID="1efb4c9781f1f4234bf85bc45997f3798dfacde25a7cc77afe7b93185efcee63" Nov 25 10:57:28 crc kubenswrapper[4813]: I1125 10:57:28.871230 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="0444b7b3-af36-4fca-80c6-8348adc42a58" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:57:28 crc kubenswrapper[4813]: I1125 10:57:28.871230 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="0444b7b3-af36-4fca-80c6-8348adc42a58" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:57:28 crc kubenswrapper[4813]: I1125 10:57:28.871374 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Nov 25 10:57:28 crc kubenswrapper[4813]: I1125 10:57:28.872168 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"287d0a488db3e43083bbc400093ccf672e4338738a7bcf5a4c9094d9867b55b0"} pod="openstack/openstack-cell1-galera-0" containerMessage="Container galera failed liveness probe, will be restarted" Nov 25 10:57:29 crc kubenswrapper[4813]: I1125 10:57:29.229304 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="0444b7b3-af36-4fca-80c6-8348adc42a58" containerName="galera" containerID="cri-o://287d0a488db3e43083bbc400093ccf672e4338738a7bcf5a4c9094d9867b55b0" gracePeriod=30 Nov 25 10:57:29 crc kubenswrapper[4813]: I1125 10:57:29.570762 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="0444b7b3-af36-4fca-80c6-8348adc42a58" containerName="galera" probeResult="failure" output=< Nov 25 10:57:29 crc kubenswrapper[4813]: WARNING: password retrieved from cluster failed authentication Nov 25 10:57:29 crc kubenswrapper[4813]: > Nov 25 10:57:30 crc kubenswrapper[4813]: I1125 10:57:30.223647 4813 generic.go:334] "Generic (PLEG): container finished" podID="0444b7b3-af36-4fca-80c6-8348adc42a58" containerID="287d0a488db3e43083bbc400093ccf672e4338738a7bcf5a4c9094d9867b55b0" exitCode=0 Nov 25 10:57:30 crc kubenswrapper[4813]: I1125 10:57:30.223724 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"0444b7b3-af36-4fca-80c6-8348adc42a58","Type":"ContainerDied","Data":"287d0a488db3e43083bbc400093ccf672e4338738a7bcf5a4c9094d9867b55b0"} Nov 25 10:57:30 crc kubenswrapper[4813]: I1125 10:57:30.223979 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"0444b7b3-af36-4fca-80c6-8348adc42a58","Type":"ContainerStarted","Data":"95cab43509f6320e4ae80b5e283d20ea2800f128e636f107b873f9f64b3165bf"} Nov 25 10:57:30 crc kubenswrapper[4813]: I1125 10:57:30.224002 4813 scope.go:117] "RemoveContainer" 
containerID="0ce6e650b0e6f5717f39355ed96f5b66a51107fdd6cec7f90294b10781b58357" Nov 25 10:57:30 crc kubenswrapper[4813]: E1125 10:57:30.782982 4813 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.129.56.91:45848->38.129.56.91:42617: read tcp 38.129.56.91:45848->38.129.56.91:42617: read: connection reset by peer Nov 25 10:57:31 crc kubenswrapper[4813]: E1125 10:57:31.281747 4813 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.129.56.91:45880->38.129.56.91:42617: write tcp 38.129.56.91:45880->38.129.56.91:42617: write: broken pipe Nov 25 10:57:36 crc kubenswrapper[4813]: I1125 10:57:36.017493 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Nov 25 10:57:36 crc kubenswrapper[4813]: I1125 10:57:36.018191 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Nov 25 10:57:36 crc kubenswrapper[4813]: I1125 10:57:36.216548 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Nov 25 10:57:37 crc kubenswrapper[4813]: I1125 10:57:37.466808 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Nov 25 10:57:37 crc kubenswrapper[4813]: I1125 10:57:37.466879 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Nov 25 10:57:37 crc kubenswrapper[4813]: I1125 10:57:37.681283 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Nov 25 10:57:37 crc kubenswrapper[4813]: I1125 10:57:37.871378 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="9005be17-9874-4f4f-bd91-39b3c74314ec" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:57:39 crc kubenswrapper[4813]: I1125 10:57:39.870840 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="0444b7b3-af36-4fca-80c6-8348adc42a58" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:57:46 crc kubenswrapper[4813]: I1125 10:57:46.870439 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="9005be17-9874-4f4f-bd91-39b3c74314ec" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:57:46 crc kubenswrapper[4813]: I1125 10:57:46.870488 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="9005be17-9874-4f4f-bd91-39b3c74314ec" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:57:48 crc kubenswrapper[4813]: I1125 10:57:48.870422 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="0444b7b3-af36-4fca-80c6-8348adc42a58" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:57:48 crc kubenswrapper[4813]: I1125 10:57:48.870609 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="0444b7b3-af36-4fca-80c6-8348adc42a58" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:57:56 crc kubenswrapper[4813]: I1125 10:57:56.870078 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="9005be17-9874-4f4f-bd91-39b3c74314ec" containerName="galera" probeResult="failure" 
output="command timed out" Nov 25 10:57:56 crc kubenswrapper[4813]: I1125 10:57:56.870934 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="9005be17-9874-4f4f-bd91-39b3c74314ec" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:57:58 crc kubenswrapper[4813]: I1125 10:57:58.870310 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="0444b7b3-af36-4fca-80c6-8348adc42a58" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:57:58 crc kubenswrapper[4813]: I1125 10:57:58.870880 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="0444b7b3-af36-4fca-80c6-8348adc42a58" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:58:06 crc kubenswrapper[4813]: I1125 10:58:06.869611 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="9005be17-9874-4f4f-bd91-39b3c74314ec" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:58:06 crc kubenswrapper[4813]: I1125 10:58:06.870212 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/openstack-galera-0" Nov 25 10:58:06 crc kubenswrapper[4813]: I1125 10:58:06.871047 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"1d22e214c9ab699b58821fbdc46c816abe0d54f7f58e7aa46dd1e2dab8173866"} pod="openstack/openstack-galera-0" containerMessage="Container galera failed liveness probe, will be restarted" Nov 25 10:58:06 crc kubenswrapper[4813]: I1125 10:58:06.871440 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="9005be17-9874-4f4f-bd91-39b3c74314ec" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:58:07 crc kubenswrapper[4813]: I1125 10:58:07.124624 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="9005be17-9874-4f4f-bd91-39b3c74314ec" containerName="galera" containerID="cri-o://1d22e214c9ab699b58821fbdc46c816abe0d54f7f58e7aa46dd1e2dab8173866" gracePeriod=30 Nov 25 10:58:07 crc kubenswrapper[4813]: I1125 10:58:07.447553 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="9005be17-9874-4f4f-bd91-39b3c74314ec" containerName="galera" probeResult="failure" output=< Nov 25 10:58:07 crc kubenswrapper[4813]: WARNING: password retrieved from cluster failed authentication Nov 25 10:58:07 crc kubenswrapper[4813]: > Nov 25 10:58:07 crc kubenswrapper[4813]: E1125 10:58:07.548821 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"galera\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=galera pod=openstack-galera-0_openstack(9005be17-9874-4f4f-bd91-39b3c74314ec)\"" pod="openstack/openstack-galera-0" podUID="9005be17-9874-4f4f-bd91-39b3c74314ec" Nov 25 10:58:08 crc kubenswrapper[4813]: I1125 10:58:08.560266 4813 generic.go:334] "Generic (PLEG): container finished" podID="9005be17-9874-4f4f-bd91-39b3c74314ec" containerID="1d22e214c9ab699b58821fbdc46c816abe0d54f7f58e7aa46dd1e2dab8173866" exitCode=0 Nov 25 10:58:08 crc kubenswrapper[4813]: I1125 10:58:08.560931 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" 
event={"ID":"9005be17-9874-4f4f-bd91-39b3c74314ec","Type":"ContainerDied","Data":"1d22e214c9ab699b58821fbdc46c816abe0d54f7f58e7aa46dd1e2dab8173866"} Nov 25 10:58:08 crc kubenswrapper[4813]: I1125 10:58:08.561018 4813 scope.go:117] "RemoveContainer" containerID="557a488d6cc0c793d961f1f7cd163956509a3712f60e929a91c2a6b7c7534930" Nov 25 10:58:08 crc kubenswrapper[4813]: I1125 10:58:08.562503 4813 scope.go:117] "RemoveContainer" containerID="1d22e214c9ab699b58821fbdc46c816abe0d54f7f58e7aa46dd1e2dab8173866" Nov 25 10:58:08 crc kubenswrapper[4813]: E1125 10:58:08.564632 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"galera\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=galera pod=openstack-galera-0_openstack(9005be17-9874-4f4f-bd91-39b3c74314ec)\"" pod="openstack/openstack-galera-0" podUID="9005be17-9874-4f4f-bd91-39b3c74314ec" Nov 25 10:58:08 crc kubenswrapper[4813]: I1125 10:58:08.870936 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="0444b7b3-af36-4fca-80c6-8348adc42a58" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:58:08 crc kubenswrapper[4813]: I1125 10:58:08.871857 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="0444b7b3-af36-4fca-80c6-8348adc42a58" containerName="galera" probeResult="failure" output="command timed out" Nov 25 10:58:08 crc kubenswrapper[4813]: I1125 10:58:08.871916 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Nov 25 10:58:08 crc kubenswrapper[4813]: I1125 10:58:08.872744 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"95cab43509f6320e4ae80b5e283d20ea2800f128e636f107b873f9f64b3165bf"} pod="openstack/openstack-cell1-galera-0" containerMessage="Container galera failed liveness probe, will be restarted" Nov 25 10:58:09 crc kubenswrapper[4813]: I1125 10:58:09.106295 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="0444b7b3-af36-4fca-80c6-8348adc42a58" containerName="galera" containerID="cri-o://95cab43509f6320e4ae80b5e283d20ea2800f128e636f107b873f9f64b3165bf" gracePeriod=30 Nov 25 10:58:09 crc kubenswrapper[4813]: I1125 10:58:09.449732 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="0444b7b3-af36-4fca-80c6-8348adc42a58" containerName="galera" probeResult="failure" output=< Nov 25 10:58:09 crc kubenswrapper[4813]: WARNING: password retrieved from cluster failed authentication Nov 25 10:58:09 crc kubenswrapper[4813]: > Nov 25 10:58:09 crc kubenswrapper[4813]: E1125 10:58:09.522937 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"galera\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=galera pod=openstack-cell1-galera-0_openstack(0444b7b3-af36-4fca-80c6-8348adc42a58)\"" pod="openstack/openstack-cell1-galera-0" podUID="0444b7b3-af36-4fca-80c6-8348adc42a58" Nov 25 10:58:09 crc kubenswrapper[4813]: I1125 10:58:09.578463 4813 generic.go:334] "Generic (PLEG): container finished" podID="0444b7b3-af36-4fca-80c6-8348adc42a58" containerID="95cab43509f6320e4ae80b5e283d20ea2800f128e636f107b873f9f64b3165bf" exitCode=0 Nov 25 10:58:09 crc kubenswrapper[4813]: I1125 10:58:09.578534 4813 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"0444b7b3-af36-4fca-80c6-8348adc42a58","Type":"ContainerDied","Data":"95cab43509f6320e4ae80b5e283d20ea2800f128e636f107b873f9f64b3165bf"} Nov 25 10:58:09 crc kubenswrapper[4813]: I1125 10:58:09.578587 4813 scope.go:117] "RemoveContainer" containerID="287d0a488db3e43083bbc400093ccf672e4338738a7bcf5a4c9094d9867b55b0" Nov 25 10:58:09 crc kubenswrapper[4813]: I1125 10:58:09.579421 4813 scope.go:117] "RemoveContainer" containerID="95cab43509f6320e4ae80b5e283d20ea2800f128e636f107b873f9f64b3165bf" Nov 25 10:58:09 crc kubenswrapper[4813]: E1125 10:58:09.579671 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"galera\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=galera pod=openstack-cell1-galera-0_openstack(0444b7b3-af36-4fca-80c6-8348adc42a58)\"" pod="openstack/openstack-cell1-galera-0" podUID="0444b7b3-af36-4fca-80c6-8348adc42a58" Nov 25 10:58:16 crc kubenswrapper[4813]: I1125 10:58:16.017890 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Nov 25 10:58:16 crc kubenswrapper[4813]: I1125 10:58:16.019157 4813 scope.go:117] "RemoveContainer" containerID="1d22e214c9ab699b58821fbdc46c816abe0d54f7f58e7aa46dd1e2dab8173866" Nov 25 10:58:16 crc kubenswrapper[4813]: E1125 10:58:16.019353 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"galera\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=galera pod=openstack-galera-0_openstack(9005be17-9874-4f4f-bd91-39b3c74314ec)\"" pod="openstack/openstack-galera-0" podUID="9005be17-9874-4f4f-bd91-39b3c74314ec" Nov 25 10:58:17 crc kubenswrapper[4813]: I1125 10:58:17.466145 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Nov 25 10:58:17 crc kubenswrapper[4813]: I1125 10:58:17.467117 4813 scope.go:117] "RemoveContainer" containerID="95cab43509f6320e4ae80b5e283d20ea2800f128e636f107b873f9f64b3165bf" Nov 25 10:58:17 crc kubenswrapper[4813]: E1125 10:58:17.467774 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"galera\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=galera pod=openstack-cell1-galera-0_openstack(0444b7b3-af36-4fca-80c6-8348adc42a58)\"" pod="openstack/openstack-cell1-galera-0" podUID="0444b7b3-af36-4fca-80c6-8348adc42a58" Nov 25 10:58:27 crc kubenswrapper[4813]: I1125 10:58:27.623142 4813 scope.go:117] "RemoveContainer" containerID="1d22e214c9ab699b58821fbdc46c816abe0d54f7f58e7aa46dd1e2dab8173866" Nov 25 10:58:27 crc kubenswrapper[4813]: E1125 10:58:27.624449 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"galera\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=galera pod=openstack-galera-0_openstack(9005be17-9874-4f4f-bd91-39b3c74314ec)\"" pod="openstack/openstack-galera-0" podUID="9005be17-9874-4f4f-bd91-39b3c74314ec" Nov 25 10:58:31 crc kubenswrapper[4813]: I1125 10:58:31.622527 4813 scope.go:117] "RemoveContainer" containerID="95cab43509f6320e4ae80b5e283d20ea2800f128e636f107b873f9f64b3165bf" Nov 25 10:58:31 crc kubenswrapper[4813]: E1125 10:58:31.623417 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"galera\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=galera 
pod=openstack-cell1-galera-0_openstack(0444b7b3-af36-4fca-80c6-8348adc42a58)\"" pod="openstack/openstack-cell1-galera-0" podUID="0444b7b3-af36-4fca-80c6-8348adc42a58" Nov 25 10:58:39 crc kubenswrapper[4813]: I1125 10:58:39.625960 4813 scope.go:117] "RemoveContainer" containerID="1d22e214c9ab699b58821fbdc46c816abe0d54f7f58e7aa46dd1e2dab8173866" Nov 25 10:58:39 crc kubenswrapper[4813]: E1125 10:58:39.627142 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"galera\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=galera pod=openstack-galera-0_openstack(9005be17-9874-4f4f-bd91-39b3c74314ec)\"" pod="openstack/openstack-galera-0" podUID="9005be17-9874-4f4f-bd91-39b3c74314ec" Nov 25 10:58:43 crc kubenswrapper[4813]: I1125 10:58:43.627156 4813 scope.go:117] "RemoveContainer" containerID="95cab43509f6320e4ae80b5e283d20ea2800f128e636f107b873f9f64b3165bf" Nov 25 10:58:43 crc kubenswrapper[4813]: E1125 10:58:43.628091 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"galera\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=galera pod=openstack-cell1-galera-0_openstack(0444b7b3-af36-4fca-80c6-8348adc42a58)\"" pod="openstack/openstack-cell1-galera-0" podUID="0444b7b3-af36-4fca-80c6-8348adc42a58" Nov 25 10:58:54 crc kubenswrapper[4813]: I1125 10:58:54.622370 4813 scope.go:117] "RemoveContainer" containerID="1d22e214c9ab699b58821fbdc46c816abe0d54f7f58e7aa46dd1e2dab8173866" Nov 25 10:58:54 crc kubenswrapper[4813]: I1125 10:58:54.623269 4813 scope.go:117] "RemoveContainer" containerID="95cab43509f6320e4ae80b5e283d20ea2800f128e636f107b873f9f64b3165bf" Nov 25 10:58:54 crc kubenswrapper[4813]: E1125 10:58:54.623428 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"galera\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=galera pod=openstack-galera-0_openstack(9005be17-9874-4f4f-bd91-39b3c74314ec)\"" pod="openstack/openstack-galera-0" podUID="9005be17-9874-4f4f-bd91-39b3c74314ec" Nov 25 10:58:54 crc kubenswrapper[4813]: E1125 10:58:54.623588 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"galera\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=galera pod=openstack-cell1-galera-0_openstack(0444b7b3-af36-4fca-80c6-8348adc42a58)\"" pod="openstack/openstack-cell1-galera-0" podUID="0444b7b3-af36-4fca-80c6-8348adc42a58" Nov 25 10:59:06 crc kubenswrapper[4813]: I1125 10:59:06.622547 4813 scope.go:117] "RemoveContainer" containerID="1d22e214c9ab699b58821fbdc46c816abe0d54f7f58e7aa46dd1e2dab8173866" Nov 25 10:59:06 crc kubenswrapper[4813]: E1125 10:59:06.623922 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"galera\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=galera pod=openstack-galera-0_openstack(9005be17-9874-4f4f-bd91-39b3c74314ec)\"" pod="openstack/openstack-galera-0" podUID="9005be17-9874-4f4f-bd91-39b3c74314ec" Nov 25 10:59:07 crc kubenswrapper[4813]: I1125 10:59:07.626187 4813 scope.go:117] "RemoveContainer" containerID="95cab43509f6320e4ae80b5e283d20ea2800f128e636f107b873f9f64b3165bf" Nov 25 10:59:07 crc kubenswrapper[4813]: E1125 10:59:07.626924 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"galera\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=galera 
pod=openstack-cell1-galera-0_openstack(0444b7b3-af36-4fca-80c6-8348adc42a58)\"" pod="openstack/openstack-cell1-galera-0" podUID="0444b7b3-af36-4fca-80c6-8348adc42a58"
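The stretch of journal above shows openstack/openstack-galera-0 and openstack/openstack-cell1-galera-0 repeatedly failing their galera liveness and readiness probes with "command timed out", being restarted by the kubelet, and then landing in CrashLoopBackOff with growing delays (back-off 40s, later back-off 2m40s). The sketch below is not part of the captured log: it is a minimal, self-contained Python helper for triaging a dump like this one, assuming the journal is fed on stdin and that the quoted field layout matches the kubenswrapper lines shown here. The back-off constants it uses (10 s base delay, doubled per restart, capped at 5 minutes) are the commonly documented kubelet defaults, not values read from this cluster's configuration; they are consistent with the 40s and 2m40s delays seen above.

```python
#!/usr/bin/env python3
"""Minimal triage sketch for a kubelet journal dump like the one above.

Assumptions (not taken from the log itself): input arrives on stdin, one
journal entry per line, the quoted field layout matches the kubenswrapper
lines shown here, and the CrashLoopBackOff defaults are the commonly
documented ones (10 s base delay, doubled per restart, 5 minute cap).
"""
import re
import sys
from collections import Counter, defaultdict

# Field layout as it appears in the excerpt above (illustrative, not exhaustive).
# Entries that were line-wrapped in the capture may be missed.
PROBE_RE = re.compile(
    r'"Probe failed" probeType="(?P<type>\w+)" pod="(?P<pod>[^"]+)"'
)
BACKOFF_RE = re.compile(
    r'back-off (?P<delay>\S+) restarting failed container=\S+ '
    r'pod=(?P<pod>\S+?)_(?P<ns>\S+?)\('
)


def summarize(lines):
    """Count probe failures per pod and collect the back-off delays seen."""
    probe_failures = Counter()
    backoff_delays = defaultdict(set)
    for line in lines:
        for m in PROBE_RE.finditer(line):
            probe_failures[(m.group("pod"), m.group("type"))] += 1
        for m in BACKOFF_RE.finditer(line):
            backoff_delays[f'{m.group("ns")}/{m.group("pod")}'].add(m.group("delay"))
    return probe_failures, backoff_delays


def backoff_schedule(restarts, base=10, cap=300):
    """CrashLoopBackOff delays in seconds, assuming the usual kubelet defaults:
    10 s base, doubled after each restart, capped at 5 minutes."""
    return [min(base * 2 ** i, cap) for i in range(restarts)]


if __name__ == "__main__":
    failures, delays = summarize(sys.stdin)
    for (pod, probe_type), count in sorted(failures.items()):
        print(f"{pod}: {probe_type} probe failed {count}x")
    for pod, seen in sorted(delays.items()):
        print(f"{pod}: CrashLoopBackOff delays seen: {', '.join(sorted(seen))}")
    # 40s is the third delay and 160s (2m40s) the fifth in the assumed schedule.
    print("assumed back-off schedule (s):", backoff_schedule(6))
```

For example (script name is hypothetical): journalctl -u kubelet --no-pager | python3 kubelet_triage.py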